00:00:00.000 Started by upstream project "autotest-nightly" build number 3880 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3260 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.117 The recommended git tool is: git 00:00:00.117 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.248 > git --version # 'git version 2.39.2' 00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.272 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.972 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.983 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.994 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:06.994 > git config core.sparsecheckout # timeout=10 00:00:07.006 > git read-tree -mu HEAD # timeout=10 00:00:07.023 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:07.046 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:07.046 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:07.147 [Pipeline] Start of Pipeline 00:00:07.160 [Pipeline] library 00:00:07.162 Loading library shm_lib@master 00:00:07.162 Library shm_lib@master is cached. Copying from home. 00:00:07.175 [Pipeline] node 00:00:07.186 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.188 [Pipeline] { 00:00:07.197 [Pipeline] catchError 00:00:07.199 [Pipeline] { 00:00:07.209 [Pipeline] wrap 00:00:07.217 [Pipeline] { 00:00:07.223 [Pipeline] stage 00:00:07.225 [Pipeline] { (Prologue) 00:00:07.240 [Pipeline] echo 00:00:07.241 Node: VM-host-SM9 00:00:07.246 [Pipeline] cleanWs 00:00:07.254 [WS-CLEANUP] Deleting project workspace... 00:00:07.254 [WS-CLEANUP] Deferred wipeout is used... 
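The checkout above pins the jbp helper repo to a single revision rather than cloning full history. To reproduce it outside Jenkins, a minimal sketch (the URL and revision are taken from the log above; the target directory is illustrative):

# Fetch only the one revision the job pins, then check it out detached.
git init jbp && cd jbp
git fetch --tags --force --depth=1 \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f

The --depth=1 shallow fetch transfers a single commit instead of the whole repository, which is why the fetch above completes in roughly seven seconds.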
00:00:07.261 [WS-CLEANUP] done 00:00:07.452 [Pipeline] setCustomBuildProperty 00:00:07.516 [Pipeline] httpRequest 00:00:07.545 [Pipeline] echo 00:00:07.547 Sorcerer 10.211.164.101 is alive 00:00:07.555 [Pipeline] httpRequest 00:00:07.560 HttpMethod: GET 00:00:07.560 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:07.561 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:07.571 Response Code: HTTP/1.1 200 OK 00:00:07.572 Success: Status code 200 is in the accepted range: 200,404 00:00:07.576 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.876 [Pipeline] sh 00:00:09.162 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:09.179 [Pipeline] httpRequest 00:00:09.211 [Pipeline] echo 00:00:09.213 Sorcerer 10.211.164.101 is alive 00:00:09.220 [Pipeline] httpRequest 00:00:09.225 HttpMethod: GET 00:00:09.225 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:09.226 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:09.226 Response Code: HTTP/1.1 200 OK 00:00:09.227 Success: Status code 200 is in the accepted range: 200,404 00:00:09.227 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:30.905 [Pipeline] sh 00:00:31.184 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:33.728 [Pipeline] sh 00:00:34.010 + git -C spdk log --oneline -n5 00:00:34.010 719d03c6a sock/uring: only register net impl if supported 00:00:34.010 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:34.010 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:34.010 6c7c1f57e accel: add sequence outstanding stat 00:00:34.010 3bc8e6a26 accel: add utility to put task 00:00:34.059 [Pipeline] writeFile 00:00:34.076 [Pipeline] sh 00:00:34.357 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:34.371 [Pipeline] sh 00:00:34.652 + cat autorun-spdk.conf 00:00:34.652 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.652 SPDK_TEST_NVME=1 00:00:34.652 SPDK_TEST_FTL=1 00:00:34.652 SPDK_TEST_ISAL=1 00:00:34.652 SPDK_RUN_ASAN=1 00:00:34.652 SPDK_RUN_UBSAN=1 00:00:34.652 SPDK_TEST_XNVME=1 00:00:34.652 SPDK_TEST_NVME_FDP=1 00:00:34.652 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:34.660 RUN_NIGHTLY=1 00:00:34.662 [Pipeline] } 00:00:34.684 [Pipeline] // stage 00:00:34.702 [Pipeline] stage 00:00:34.705 [Pipeline] { (Run VM) 00:00:34.723 [Pipeline] sh 00:00:35.004 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:35.004 + echo 'Start stage prepare_nvme.sh' 00:00:35.004 Start stage prepare_nvme.sh 00:00:35.004 + [[ -n 4 ]] 00:00:35.004 + disk_prefix=ex4 00:00:35.004 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:35.004 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:35.004 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:35.004 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.004 ++ SPDK_TEST_NVME=1 00:00:35.004 ++ SPDK_TEST_FTL=1 00:00:35.004 ++ SPDK_TEST_ISAL=1 00:00:35.004 ++ SPDK_RUN_ASAN=1 00:00:35.004 ++ SPDK_RUN_UBSAN=1 00:00:35.004 ++ SPDK_TEST_XNVME=1 00:00:35.004 ++ SPDK_TEST_NVME_FDP=1 00:00:35.004 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.004 ++ RUN_NIGHTLY=1 00:00:35.004 + cd 
/var/jenkins/workspace/nvme-vg-autotest 00:00:35.004 + nvme_files=() 00:00:35.004 + declare -A nvme_files 00:00:35.004 + backend_dir=/var/lib/libvirt/images/backends 00:00:35.004 + nvme_files['nvme.img']=5G 00:00:35.004 + nvme_files['nvme-cmb.img']=5G 00:00:35.004 + nvme_files['nvme-multi0.img']=4G 00:00:35.004 + nvme_files['nvme-multi1.img']=4G 00:00:35.004 + nvme_files['nvme-multi2.img']=4G 00:00:35.004 + nvme_files['nvme-openstack.img']=8G 00:00:35.004 + nvme_files['nvme-zns.img']=5G 00:00:35.004 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:35.004 + (( SPDK_TEST_FTL == 1 )) 00:00:35.004 + nvme_files["nvme-ftl.img"]=6G 00:00:35.004 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:35.004 + nvme_files["nvme-fdp.img"]=1G 00:00:35.004 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:35.004 + for nvme in "${!nvme_files[@]}" 00:00:35.004 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:35.004 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.004 + for nvme in "${!nvme_files[@]}" 00:00:35.004 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:00:35.263 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:35.263 + for nvme in "${!nvme_files[@]}" 00:00:35.263 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:35.263 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.263 + for nvme in "${!nvme_files[@]}" 00:00:35.263 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:35.523 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:35.523 + for nvme in "${!nvme_files[@]}" 00:00:35.523 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:35.523 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.523 + for nvme in "${!nvme_files[@]}" 00:00:35.523 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:35.782 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.782 + for nvme in "${!nvme_files[@]}" 00:00:35.782 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:36.041 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:36.041 + for nvme in "${!nvme_files[@]}" 00:00:36.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:00:36.041 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:36.041 + for nvme in "${!nvme_files[@]}" 00:00:36.041 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:36.300 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:36.300 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:36.300 + echo 'End stage prepare_nvme.sh' 00:00:36.300 End stage 
prepare_nvme.sh 00:00:36.312 [Pipeline] sh 00:00:36.592 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:36.592 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:00:36.851 00:00:36.851 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:36.851 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:36.851 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:36.851 HELP=0 00:00:36.851 DRY_RUN=0 00:00:36.851 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:00:36.851 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:36.851 NVME_AUTO_CREATE=0 00:00:36.851 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:00:36.851 NVME_CMB=,,,, 00:00:36.851 NVME_PMR=,,,, 00:00:36.851 NVME_ZNS=,,,, 00:00:36.851 NVME_MS=true,,,, 00:00:36.851 NVME_FDP=,,,on, 00:00:36.851 SPDK_VAGRANT_DISTRO=fedora38 00:00:36.851 SPDK_VAGRANT_VMCPU=10 00:00:36.851 SPDK_VAGRANT_VMRAM=12288 00:00:36.851 SPDK_VAGRANT_PROVIDER=libvirt 00:00:36.851 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:36.851 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:36.851 SPDK_OPENSTACK_NETWORK=0 00:00:36.851 VAGRANT_PACKAGE_BOX=0 00:00:36.851 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:36.851 FORCE_DISTRO=true 00:00:36.851 VAGRANT_BOX_VERSION= 00:00:36.851 EXTRA_VAGRANTFILES= 00:00:36.851 NIC_MODEL=e1000 00:00:36.851 00:00:36.851 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:00:36.851 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:40.142 Bringing machine 'default' up with 'libvirt' provider... 00:00:40.400 ==> default: Creating image (snapshot of base box volume). 00:00:40.658 ==> default: Creating domain with the following settings... 
00:00:40.658 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720710533_20dd56a65e596db06716 00:00:40.658 ==> default: -- Domain type: kvm 00:00:40.658 ==> default: -- Cpus: 10 00:00:40.658 ==> default: -- Feature: acpi 00:00:40.658 ==> default: -- Feature: apic 00:00:40.658 ==> default: -- Feature: pae 00:00:40.658 ==> default: -- Memory: 12288M 00:00:40.658 ==> default: -- Memory Backing: hugepages: 00:00:40.658 ==> default: -- Management MAC: 00:00:40.658 ==> default: -- Loader: 00:00:40.658 ==> default: -- Nvram: 00:00:40.658 ==> default: -- Base box: spdk/fedora38 00:00:40.658 ==> default: -- Storage pool: default 00:00:40.658 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720710533_20dd56a65e596db06716.img (20G) 00:00:40.658 ==> default: -- Volume Cache: default 00:00:40.658 ==> default: -- Kernel: 00:00:40.658 ==> default: -- Initrd: 00:00:40.658 ==> default: -- Graphics Type: vnc 00:00:40.658 ==> default: -- Graphics Port: -1 00:00:40.658 ==> default: -- Graphics IP: 127.0.0.1 00:00:40.658 ==> default: -- Graphics Password: Not defined 00:00:40.658 ==> default: -- Video Type: cirrus 00:00:40.658 ==> default: -- Video VRAM: 9216 00:00:40.658 ==> default: -- Sound Type: 00:00:40.658 ==> default: -- Keymap: en-us 00:00:40.658 ==> default: -- TPM Path: 00:00:40.658 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:40.658 ==> default: -- Command line args: 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:40.658 ==> default: -> value=-drive, 00:00:40.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:40.658 ==> default: -> value=-drive, 00:00:40.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:40.658 ==> default: -> value=-drive, 00:00:40.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.658 ==> default: -> value=-drive, 00:00:40.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.658 ==> default: -> value=-drive, 00:00:40.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:40.658 ==> default: -> value=-drive, 00:00:40.658 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:40.658 ==> default: -> value=-device, 00:00:40.658 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:40.658 ==> default: Creating shared folders metadata... 00:00:40.658 ==> default: Starting domain. 00:00:42.032 ==> default: Waiting for domain to get an IP address... 00:01:00.116 ==> default: Waiting for SSH to become available... 00:01:00.116 ==> default: Configuring and enabling network interfaces... 00:01:02.799 default: SSH address: 192.168.121.61:22 00:01:02.799 default: SSH username: vagrant 00:01:02.799 default: SSH auth method: private key 00:01:04.700 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:12.819 ==> default: Mounting SSHFS shared folder... 00:01:13.756 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:13.756 ==> default: Checking Mount.. 00:01:15.133 ==> default: Folder Successfully Mounted! 00:01:15.133 ==> default: Running provisioner: file... 00:01:16.074 default: ~/.gitconfig => .gitconfig 00:01:16.333 00:01:16.333 SUCCESS! 00:01:16.333 00:01:16.333 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:16.333 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:16.333 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:16.333 00:01:16.342 [Pipeline] } 00:01:16.361 [Pipeline] // stage 00:01:16.370 [Pipeline] dir 00:01:16.371 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:16.373 [Pipeline] { 00:01:16.389 [Pipeline] catchError 00:01:16.390 [Pipeline] { 00:01:16.405 [Pipeline] sh 00:01:16.683 + vagrant ssh-config --host vagrant 00:01:16.683 + sed -ne /^Host/,$p 00:01:16.683 + tee ssh_conf 00:01:19.973 Host vagrant 00:01:19.973 HostName 192.168.121.61 00:01:19.973 User vagrant 00:01:19.973 Port 22 00:01:19.973 UserKnownHostsFile /dev/null 00:01:19.973 StrictHostKeyChecking no 00:01:19.973 PasswordAuthentication no 00:01:19.973 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:19.973 IdentitiesOnly yes 00:01:19.973 LogLevel FATAL 00:01:19.973 ForwardAgent yes 00:01:19.973 ForwardX11 yes 00:01:19.973 00:01:19.985 [Pipeline] withEnv 00:01:19.987 [Pipeline] { 00:01:19.998 [Pipeline] sh 00:01:20.271 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:20.271 source /etc/os-release 00:01:20.271 [[ -e /image.version ]] && img=$(< /image.version) 00:01:20.271 # Minimal, systemd-like check. 
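# (/.dockerenv is created by the container runtime inside every Docker
#  container, so its presence distinguishes a containerized agent from a
#  bare VM; non-container nodes skip this block and report N/A below.)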
00:01:20.271 if [[ -e /.dockerenv ]]; then 00:01:20.271 # Clear garbage from the node's name: 00:01:20.271 # agt-er_autotest_547-896 -> autotest_547-896 00:01:20.271 # $HOSTNAME is the actual container id 00:01:20.271 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:20.271 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:20.271 # We can assume this is a mount from a host where container is running, 00:01:20.271 # so fetch its hostname to easily identify the target swarm worker. 00:01:20.271 container="$(< /etc/hostname) ($agent)" 00:01:20.271 else 00:01:20.271 # Fallback 00:01:20.271 container=$agent 00:01:20.271 fi 00:01:20.271 fi 00:01:20.271 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:20.271 00:01:20.339 [Pipeline] } 00:01:20.357 [Pipeline] // withEnv 00:01:20.363 [Pipeline] setCustomBuildProperty 00:01:20.375 [Pipeline] stage 00:01:20.376 [Pipeline] { (Tests) 00:01:20.391 [Pipeline] sh 00:01:20.682 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:20.952 [Pipeline] sh 00:01:21.230 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:21.505 [Pipeline] timeout 00:01:21.505 Timeout set to expire in 40 min 00:01:21.508 [Pipeline] { 00:01:21.524 [Pipeline] sh 00:01:21.799 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:22.366 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:01:22.379 [Pipeline] sh 00:01:22.657 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:22.931 [Pipeline] sh 00:01:23.211 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:23.486 [Pipeline] sh 00:01:23.767 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:23.767 ++ readlink -f spdk_repo 00:01:23.767 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:23.767 + [[ -n /home/vagrant/spdk_repo ]] 00:01:23.767 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:23.767 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:23.767 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:23.767 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:23.767 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:23.767 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:23.767 + cd /home/vagrant/spdk_repo 00:01:23.767 + source /etc/os-release 00:01:23.767 ++ NAME='Fedora Linux' 00:01:23.767 ++ VERSION='38 (Cloud Edition)' 00:01:23.767 ++ ID=fedora 00:01:23.767 ++ VERSION_ID=38 00:01:23.767 ++ VERSION_CODENAME= 00:01:23.767 ++ PLATFORM_ID=platform:f38 00:01:23.767 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:23.767 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.767 ++ LOGO=fedora-logo-icon 00:01:23.767 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:23.767 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.767 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:23.767 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.767 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.767 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.767 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:23.767 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.767 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:23.767 ++ SUPPORT_END=2024-05-14 00:01:23.767 ++ VARIANT='Cloud Edition' 00:01:23.767 ++ VARIANT_ID=cloud 00:01:23.767 + uname -a 00:01:23.767 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:24.026 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:24.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:24.544 Hugepages 00:01:24.544 node hugesize free / total 00:01:24.544 node0 1048576kB 0 / 0 00:01:24.544 node0 2048kB 0 / 0 00:01:24.544 00:01:24.544 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.544 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:24.544 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:24.544 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:24.802 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:24.802 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:24.802 + rm -f /tmp/spdk-ld-path 00:01:24.802 + source autorun-spdk.conf 00:01:24.802 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.802 ++ SPDK_TEST_NVME=1 00:01:24.802 ++ SPDK_TEST_FTL=1 00:01:24.802 ++ SPDK_TEST_ISAL=1 00:01:24.802 ++ SPDK_RUN_ASAN=1 00:01:24.802 ++ SPDK_RUN_UBSAN=1 00:01:24.802 ++ SPDK_TEST_XNVME=1 00:01:24.802 ++ SPDK_TEST_NVME_FDP=1 00:01:24.802 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.802 ++ RUN_NIGHTLY=1 00:01:24.802 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.802 + [[ -n '' ]] 00:01:24.802 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:24.802 + for M in /var/spdk/build-*-manifest.txt 00:01:24.802 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.802 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.802 + for M in /var/spdk/build-*-manifest.txt 00:01:24.802 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.802 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.802 ++ uname 00:01:24.802 + [[ Linux == \L\i\n\u\x ]] 00:01:24.802 + sudo dmesg -T 00:01:24.802 + sudo dmesg --clear 00:01:24.802 + dmesg_pid=5188 00:01:24.802 + [[ Fedora Linux == FreeBSD ]] 00:01:24.802 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.802 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.802 + sudo dmesg -Tw 00:01:24.802 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.802 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.802 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.802 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.802 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.802 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.802 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.802 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.802 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.802 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.802 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.802 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.802 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.802 Test configuration: 00:01:24.802 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.802 SPDK_TEST_NVME=1 00:01:24.802 SPDK_TEST_FTL=1 00:01:24.802 SPDK_TEST_ISAL=1 00:01:24.802 SPDK_RUN_ASAN=1 00:01:24.802 SPDK_RUN_UBSAN=1 00:01:24.802 SPDK_TEST_XNVME=1 00:01:24.802 SPDK_TEST_NVME_FDP=1 00:01:24.802 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.062 RUN_NIGHTLY=1 15:09:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:25.062 15:09:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.062 15:09:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.062 15:09:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.062 15:09:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.062 15:09:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.062 15:09:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.062 15:09:38 -- paths/export.sh@5 -- $ export PATH 00:01:25.062 15:09:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.062 15:09:38 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:25.062 15:09:38 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:25.062 15:09:38 -- common/autobuild_common.sh@444 -- $ mktemp -dt 
spdk_1720710578.XXXXXX 00:01:25.062 15:09:38 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720710578.1FefVQ 00:01:25.062 15:09:38 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:25.062 15:09:38 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:25.062 15:09:38 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:25.062 15:09:38 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:25.062 15:09:38 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.062 15:09:38 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:25.062 15:09:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:25.062 15:09:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.062 15:09:38 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:25.062 15:09:38 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:25.062 15:09:38 -- pm/common@17 -- $ local monitor 00:01:25.062 15:09:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.062 15:09:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.062 15:09:38 -- pm/common@21 -- $ date +%s 00:01:25.062 15:09:38 -- pm/common@25 -- $ sleep 1 00:01:25.062 15:09:38 -- pm/common@21 -- $ date +%s 00:01:25.062 15:09:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720710578 00:01:25.062 15:09:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720710578 00:01:25.062 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720710578_collect-vmstat.pm.log 00:01:25.062 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720710578_collect-cpu-load.pm.log 00:01:26.001 15:09:39 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:26.001 15:09:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.001 15:09:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.001 15:09:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:26.001 15:09:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.001 Thu Jul 11 03:09:39 PM UTC 2024 00:01:26.001 15:09:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.001 v24.09-pre-202-g719d03c6a 00:01:26.001 15:09:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:26.001 15:09:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:26.001 15:09:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.001 15:09:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.001 15:09:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.001 ************************************ 00:01:26.001 START TEST asan 00:01:26.001 ************************************ 00:01:26.001 using asan 00:01:26.001 15:09:39 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:26.001 00:01:26.001 
real 0m0.000s 00:01:26.001 user 0m0.000s 00:01:26.001 sys 0m0.000s 00:01:26.001 15:09:39 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:26.001 ************************************ 00:01:26.001 END TEST asan 00:01:26.001 ************************************ 00:01:26.001 15:09:39 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.001 15:09:39 -- common/autotest_common.sh@1142 -- $ return 0 00:01:26.001 15:09:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.001 15:09:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:26.001 15:09:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:26.001 15:09:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:26.001 15:09:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.001 ************************************ 00:01:26.001 START TEST ubsan 00:01:26.001 ************************************ 00:01:26.001 using ubsan 00:01:26.001 15:09:39 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:26.001 00:01:26.002 real 0m0.000s 00:01:26.002 user 0m0.000s 00:01:26.002 sys 0m0.000s 00:01:26.002 15:09:39 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:26.002 15:09:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.002 ************************************ 00:01:26.002 END TEST ubsan 00:01:26.002 ************************************ 00:01:26.002 15:09:39 -- common/autotest_common.sh@1142 -- $ return 0 00:01:26.002 15:09:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.002 15:09:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.002 15:09:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.002 15:09:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:26.002 15:09:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:26.002 15:09:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:26.002 15:09:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:26.002 15:09:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:26.002 15:09:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:26.260 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:26.260 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:26.828 Using 'verbs' RDMA provider 00:01:42.638 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:54.879 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:54.879 Creating mk/config.mk...done. 00:01:54.879 Creating mk/cc.flags.mk...done. 00:01:54.879 Type 'make' to build. 
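The run_test wrapper that appears throughout this log produces the START TEST/END TEST banners and the real/user/sys timing blocks shown for the asan and ubsan checks above. A simplified sketch of that pattern (illustrative only; SPDK's actual run_test, in autotest_common.sh per the xtrace lines above, also manages xtrace state and does argument checking):

# Illustrative reimplementation of the banner-and-timing pattern only.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys block seen in this log
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# Invoked exactly as in the next log line:
run_test make make -j10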
00:01:54.879 15:10:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:54.879 15:10:07 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:54.879 15:10:07 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.879 15:10:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.880 ************************************ 00:01:54.880 START TEST make 00:01:54.880 ************************************ 00:01:54.880 15:10:07 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:54.880 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:01:54.880 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:01:54.880 meson setup builddir \ 00:01:54.880 -Dwith-libaio=enabled \ 00:01:54.880 -Dwith-liburing=enabled \ 00:01:54.880 -Dwith-libvfn=disabled \ 00:01:54.880 -Dwith-spdk=false && \ 00:01:54.880 meson compile -C builddir && \ 00:01:54.880 cd -) 00:01:54.880 make[1]: Nothing to be done for 'all'. 00:01:56.783 The Meson build system 00:01:56.783 Version: 1.3.1 00:01:56.783 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:01:56.783 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:56.783 Build type: native build 00:01:56.783 Project name: xnvme 00:01:56.783 Project version: 0.7.3 00:01:56.783 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:56.783 C linker for the host machine: cc ld.bfd 2.39-16 00:01:56.783 Host machine cpu family: x86_64 00:01:56.783 Host machine cpu: x86_64 00:01:56.783 Message: host_machine.system: linux 00:01:56.783 Compiler for C supports arguments -Wno-missing-braces: YES 00:01:56.783 Compiler for C supports arguments -Wno-cast-function-type: YES 00:01:56.783 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:56.783 Run-time dependency threads found: YES 00:01:56.783 Has header "setupapi.h" : NO 00:01:56.783 Has header "linux/blkzoned.h" : YES 00:01:56.783 Has header "linux/blkzoned.h" : YES (cached) 00:01:56.783 Has header "libaio.h" : YES 00:01:56.783 Library aio found: YES 00:01:56.783 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:56.783 Run-time dependency liburing found: YES 2.2 00:01:56.783 Dependency libvfn skipped: feature with-libvfn disabled 00:01:56.783 Run-time dependency appleframeworks found: NO (tried framework) 00:01:56.783 Run-time dependency appleframeworks found: NO (tried framework) 00:01:56.783 Configuring xnvme_config.h using configuration 00:01:56.783 Configuring xnvme.spec using configuration 00:01:56.783 Run-time dependency bash-completion found: YES 2.11 00:01:56.783 Message: Bash-completions: /usr/share/bash-completion/completions 00:01:56.783 Program cp found: YES (/usr/bin/cp) 00:01:56.783 Has header "winsock2.h" : NO 00:01:56.783 Has header "dbghelp.h" : NO 00:01:56.783 Library rpcrt4 found: NO 00:01:56.783 Library rt found: YES 00:01:56.783 Checking for function "clock_gettime" with dependency -lrt: YES 00:01:56.783 Found CMake: /usr/bin/cmake (3.27.7) 00:01:56.783 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:01:56.783 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:01:56.783 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:01:56.783 Build targets in project: 32 00:01:56.783 00:01:56.783 xnvme 0.7.3 00:01:56.783 00:01:56.783 User defined options 00:01:56.783 with-libaio : enabled 00:01:56.783 with-liburing: enabled 00:01:56.783 with-libvfn : disabled 00:01:56.783 with-spdk : false 00:01:56.783 00:01:56.784 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.350 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:01:57.350 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:01:57.350 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:01:57.350 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:01:57.350 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:01:57.350 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:01:57.350 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:01:57.350 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:01:57.350 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:01:57.350 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:01:57.350 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:01:57.350 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:01:57.350 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:01:57.350 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:01:57.350 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:01:57.350 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:01:57.609 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:01:57.609 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:01:57.609 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:01:57.609 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:01:57.609 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:01:57.609 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:01:57.609 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:01:57.609 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:01:57.609 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:01:57.609 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:01:57.609 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:01:57.609 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:01:57.609 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:01:57.609 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:01:57.609 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:01:57.609 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:01:57.609 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:01:57.609 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:01:57.609 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:01:57.609 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:01:57.609 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:01:57.609 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:01:57.609 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:01:57.609 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:01:57.609 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:01:57.609 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:01:57.609 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:01:57.867 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:01:57.867 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:01:57.867 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:01:57.867 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:01:57.867 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:01:57.867 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:01:57.867 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:01:57.867 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:01:57.867 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:01:57.867 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:01:57.867 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:01:57.867 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:01:57.867 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:01:57.867 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:01:57.867 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:01:57.867 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:01:57.867 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:01:57.867 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:01:57.867 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:01:57.867 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:01:57.867 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:01:58.124 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:01:58.124 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:01:58.124 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:01:58.124 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:01:58.124 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:01:58.124 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:01:58.124 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:01:58.124 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:01:58.124 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:01:58.124 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:01:58.124 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:01:58.124 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:01:58.124 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:01:58.124 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:01:58.124 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:01:58.124 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:01:58.124 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:01:58.124 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:01:58.381 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:01:58.381 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:01:58.381 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:01:58.381 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:01:58.381 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:01:58.381 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:01:58.381 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:01:58.381 [89/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:01:58.381 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:01:58.381 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:01:58.381 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:01:58.381 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:01:58.381 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:01:58.381 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:01:58.639 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:01:58.639 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:01:58.639 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:01:58.639 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:01:58.639 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:01:58.639 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:01:58.639 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:01:58.639 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:01:58.639 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:01:58.639 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:01:58.639 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:01:58.639 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:01:58.639 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:01:58.639 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:01:58.639 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:01:58.639 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:01:58.639 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:01:58.639 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:01:58.639 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:01:58.639 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:01:58.639 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:01:58.639 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:01:58.639 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:01:58.639 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:01:58.639 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:01:58.639 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:01:58.639 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:01:58.639 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:01:58.639 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:01:58.639 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:01:58.897 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:01:58.897 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:01:58.897 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:01:58.897 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_libconf.c.o 00:01:58.897 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:01:58.897 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:01:58.897 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:01:58.897 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:01:58.897 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:01:58.897 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:01:58.897 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:01:58.897 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:01:58.897 [138/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:01:58.897 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:01:58.897 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:01:58.897 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:01:58.897 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:01:58.897 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:01:59.155 [144/203] Linking target lib/libxnvme.so 00:01:59.155 [145/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:01:59.155 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:01:59.155 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:01:59.155 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:01:59.155 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:01:59.155 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:01:59.155 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:01:59.155 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:01:59.155 [153/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:01:59.155 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:01:59.155 [155/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:01:59.155 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:01:59.413 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:01:59.413 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:01:59.413 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:01:59.413 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:01:59.413 [161/203] Compiling C object tools/xdd.p/xdd.c.o 00:01:59.413 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:01:59.413 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:01:59.413 [164/203] Compiling C object tools/kvs.p/kvs.c.o 00:01:59.413 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:01:59.413 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:01:59.413 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:01:59.413 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:01:59.673 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:01:59.673 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:01:59.673 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:01:59.673 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:01:59.673 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:01:59.673 [174/203] Linking static target lib/libxnvme.a 00:01:59.673 [175/203] Linking target 
tests/xnvme_tests_async_intf 00:01:59.673 [176/203] Linking target tests/xnvme_tests_buf 00:01:59.930 [177/203] Linking target tests/xnvme_tests_xnvme_cli 00:01:59.930 [178/203] Linking target tests/xnvme_tests_znd_explicit_open 00:01:59.930 [179/203] Linking target tests/xnvme_tests_enum 00:01:59.930 [180/203] Linking target tests/xnvme_tests_lblk 00:01:59.930 [181/203] Linking target tests/xnvme_tests_scc 00:01:59.930 [182/203] Linking target tests/xnvme_tests_cli 00:01:59.930 [183/203] Linking target tests/xnvme_tests_kvs 00:01:59.930 [184/203] Linking target tests/xnvme_tests_znd_append 00:01:59.930 [185/203] Linking target tests/xnvme_tests_znd_state 00:01:59.930 [186/203] Linking target tests/xnvme_tests_znd_zrwa 00:01:59.930 [187/203] Linking target tests/xnvme_tests_ioworker 00:01:59.930 [188/203] Linking target tests/xnvme_tests_xnvme_file 00:01:59.930 [189/203] Linking target tools/lblk 00:01:59.930 [190/203] Linking target tests/xnvme_tests_map 00:01:59.930 [191/203] Linking target tools/xnvme 00:01:59.930 [192/203] Linking target tools/xnvme_file 00:01:59.930 [193/203] Linking target tools/xdd 00:01:59.930 [194/203] Linking target tools/kvs 00:01:59.930 [195/203] Linking target examples/xnvme_enum 00:01:59.930 [196/203] Linking target examples/xnvme_hello 00:01:59.930 [197/203] Linking target examples/xnvme_single_sync 00:01:59.930 [198/203] Linking target examples/xnvme_single_async 00:01:59.930 [199/203] Linking target tools/zoned 00:01:59.930 [200/203] Linking target examples/zoned_io_sync 00:01:59.930 [201/203] Linking target examples/xnvme_dev 00:01:59.930 [202/203] Linking target examples/zoned_io_async 00:01:59.930 [203/203] Linking target examples/xnvme_io_async 00:01:59.930 INFO: autodetecting backend as ninja 00:01:59.930 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:01:59.930 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:05.222 The Meson build system 00:02:05.222 Version: 1.3.1 00:02:05.222 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.222 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.222 Build type: native build 00:02:05.222 Program cat found: YES (/usr/bin/cat) 00:02:05.222 Project name: DPDK 00:02:05.222 Project version: 24.03.0 00:02:05.222 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:05.222 C linker for the host machine: cc ld.bfd 2.39-16 00:02:05.222 Host machine cpu family: x86_64 00:02:05.222 Host machine cpu: x86_64 00:02:05.222 Message: ## Building in Developer Mode ## 00:02:05.222 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.222 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.222 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.222 Program python3 found: YES (/usr/bin/python3) 00:02:05.222 Program cat found: YES (/usr/bin/cat) 00:02:05.222 Compiler for C supports arguments -march=native: YES 00:02:05.222 Checking for size of "void *" : 8 00:02:05.222 Checking for size of "void *" : 8 (cached) 00:02:05.222 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:05.222 Library m found: YES 00:02:05.222 Library numa found: YES 00:02:05.222 Has header "numaif.h" : YES 00:02:05.222 Library fdt found: NO 00:02:05.222 Library execinfo found: NO 00:02:05.222 Has header "execinfo.h" : YES 00:02:05.222 Found pkg-config: YES (/usr/bin/pkg-config) 
1.8.0 00:02:05.222 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.222 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.222 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.222 Run-time dependency openssl found: YES 3.0.9 00:02:05.222 Run-time dependency libpcap found: YES 1.10.4 00:02:05.222 Has header "pcap.h" with dependency libpcap: YES 00:02:05.222 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.222 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.222 Compiler for C supports arguments -Wformat: YES 00:02:05.222 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.222 Compiler for C supports arguments -Wformat-security: NO 00:02:05.222 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.222 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.222 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.222 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.222 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.222 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.222 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.222 Compiler for C supports arguments -Wundef: YES 00:02:05.222 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.222 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.222 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.222 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.222 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.222 Program objdump found: YES (/usr/bin/objdump) 00:02:05.222 Compiler for C supports arguments -mavx512f: YES 00:02:05.222 Checking if "AVX512 checking" compiles: YES 00:02:05.222 Fetching value of define "__SSE4_2__" : 1 00:02:05.222 Fetching value of define "__AES__" : 1 00:02:05.222 Fetching value of define "__AVX__" : 1 00:02:05.222 Fetching value of define "__AVX2__" : 1 00:02:05.222 Fetching value of define "__AVX512BW__" : (undefined) 00:02:05.222 Fetching value of define "__AVX512CD__" : (undefined) 00:02:05.222 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:05.222 Fetching value of define "__AVX512F__" : (undefined) 00:02:05.222 Fetching value of define "__AVX512VL__" : (undefined) 00:02:05.222 Fetching value of define "__PCLMUL__" : 1 00:02:05.222 Fetching value of define "__RDRND__" : 1 00:02:05.222 Fetching value of define "__RDSEED__" : 1 00:02:05.222 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.222 Fetching value of define "__znver1__" : (undefined) 00:02:05.222 Fetching value of define "__znver2__" : (undefined) 00:02:05.222 Fetching value of define "__znver3__" : (undefined) 00:02:05.222 Fetching value of define "__znver4__" : (undefined) 00:02:05.222 Library asan found: YES 00:02:05.222 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.222 Message: lib/log: Defining dependency "log" 00:02:05.222 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.222 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.222 Library rt found: YES 00:02:05.222 Checking for function "getentropy" : NO 00:02:05.222 Message: lib/eal: Defining dependency "eal" 00:02:05.222 Message: lib/ring: Defining dependency "ring" 00:02:05.222 Message: lib/rcu: Defining dependency "rcu" 00:02:05.222 Message: lib/mempool: Defining dependency "mempool" 00:02:05.222 Message: lib/mbuf: Defining 
dependency "mbuf" 00:02:05.222 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.222 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:05.222 Compiler for C supports arguments -mpclmul: YES 00:02:05.222 Compiler for C supports arguments -maes: YES 00:02:05.222 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.222 Compiler for C supports arguments -mavx512bw: YES 00:02:05.222 Compiler for C supports arguments -mavx512dq: YES 00:02:05.222 Compiler for C supports arguments -mavx512vl: YES 00:02:05.222 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.222 Compiler for C supports arguments -mavx2: YES 00:02:05.222 Compiler for C supports arguments -mavx: YES 00:02:05.222 Message: lib/net: Defining dependency "net" 00:02:05.222 Message: lib/meter: Defining dependency "meter" 00:02:05.222 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.222 Message: lib/pci: Defining dependency "pci" 00:02:05.222 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.222 Message: lib/hash: Defining dependency "hash" 00:02:05.222 Message: lib/timer: Defining dependency "timer" 00:02:05.222 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.222 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.222 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.222 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.222 Message: lib/power: Defining dependency "power" 00:02:05.222 Message: lib/reorder: Defining dependency "reorder" 00:02:05.222 Message: lib/security: Defining dependency "security" 00:02:05.222 Has header "linux/userfaultfd.h" : YES 00:02:05.222 Has header "linux/vduse.h" : YES 00:02:05.222 Message: lib/vhost: Defining dependency "vhost" 00:02:05.222 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.222 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.222 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.222 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.222 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.222 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.222 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.222 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.222 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.222 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.223 Program doxygen found: YES (/usr/bin/doxygen) 00:02:05.223 Configuring doxy-api-html.conf using configuration 00:02:05.223 Configuring doxy-api-man.conf using configuration 00:02:05.223 Program mandb found: YES (/usr/bin/mandb) 00:02:05.223 Program sphinx-build found: NO 00:02:05.223 Configuring rte_build_config.h using configuration 00:02:05.223 Message: 00:02:05.223 ================= 00:02:05.223 Applications Enabled 00:02:05.223 ================= 00:02:05.223 00:02:05.223 apps: 00:02:05.223 00:02:05.223 00:02:05.223 Message: 00:02:05.223 ================= 00:02:05.223 Libraries Enabled 00:02:05.223 ================= 00:02:05.223 00:02:05.223 libs: 00:02:05.223 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.223 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.223 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.223 00:02:05.223 Message: 00:02:05.223 =============== 00:02:05.223 Drivers Enabled 
00:02:05.223 =============== 00:02:05.223 00:02:05.223 common: 00:02:05.223 00:02:05.223 bus: 00:02:05.223 pci, vdev, 00:02:05.223 mempool: 00:02:05.223 ring, 00:02:05.223 dma: 00:02:05.223 00:02:05.223 net: 00:02:05.223 00:02:05.223 crypto: 00:02:05.223 00:02:05.223 compress: 00:02:05.223 00:02:05.223 vdpa: 00:02:05.223 00:02:05.223 00:02:05.223 Message: 00:02:05.223 ================= 00:02:05.223 Content Skipped 00:02:05.223 ================= 00:02:05.223 00:02:05.223 apps: 00:02:05.223 dumpcap: explicitly disabled via build config 00:02:05.223 graph: explicitly disabled via build config 00:02:05.223 pdump: explicitly disabled via build config 00:02:05.223 proc-info: explicitly disabled via build config 00:02:05.223 test-acl: explicitly disabled via build config 00:02:05.223 test-bbdev: explicitly disabled via build config 00:02:05.223 test-cmdline: explicitly disabled via build config 00:02:05.223 test-compress-perf: explicitly disabled via build config 00:02:05.223 test-crypto-perf: explicitly disabled via build config 00:02:05.223 test-dma-perf: explicitly disabled via build config 00:02:05.223 test-eventdev: explicitly disabled via build config 00:02:05.223 test-fib: explicitly disabled via build config 00:02:05.223 test-flow-perf: explicitly disabled via build config 00:02:05.223 test-gpudev: explicitly disabled via build config 00:02:05.223 test-mldev: explicitly disabled via build config 00:02:05.223 test-pipeline: explicitly disabled via build config 00:02:05.223 test-pmd: explicitly disabled via build config 00:02:05.223 test-regex: explicitly disabled via build config 00:02:05.223 test-sad: explicitly disabled via build config 00:02:05.223 test-security-perf: explicitly disabled via build config 00:02:05.223 00:02:05.223 libs: 00:02:05.223 argparse: explicitly disabled via build config 00:02:05.223 metrics: explicitly disabled via build config 00:02:05.223 acl: explicitly disabled via build config 00:02:05.223 bbdev: explicitly disabled via build config 00:02:05.223 bitratestats: explicitly disabled via build config 00:02:05.223 bpf: explicitly disabled via build config 00:02:05.223 cfgfile: explicitly disabled via build config 00:02:05.223 distributor: explicitly disabled via build config 00:02:05.223 efd: explicitly disabled via build config 00:02:05.223 eventdev: explicitly disabled via build config 00:02:05.223 dispatcher: explicitly disabled via build config 00:02:05.223 gpudev: explicitly disabled via build config 00:02:05.223 gro: explicitly disabled via build config 00:02:05.223 gso: explicitly disabled via build config 00:02:05.223 ip_frag: explicitly disabled via build config 00:02:05.223 jobstats: explicitly disabled via build config 00:02:05.223 latencystats: explicitly disabled via build config 00:02:05.223 lpm: explicitly disabled via build config 00:02:05.223 member: explicitly disabled via build config 00:02:05.223 pcapng: explicitly disabled via build config 00:02:05.223 rawdev: explicitly disabled via build config 00:02:05.223 regexdev: explicitly disabled via build config 00:02:05.223 mldev: explicitly disabled via build config 00:02:05.223 rib: explicitly disabled via build config 00:02:05.223 sched: explicitly disabled via build config 00:02:05.223 stack: explicitly disabled via build config 00:02:05.223 ipsec: explicitly disabled via build config 00:02:05.223 pdcp: explicitly disabled via build config 00:02:05.223 fib: explicitly disabled via build config 00:02:05.223 port: explicitly disabled via build config 00:02:05.223 pdump: explicitly disabled via 
build config 00:02:05.223 table: explicitly disabled via build config 00:02:05.223 pipeline: explicitly disabled via build config 00:02:05.223 graph: explicitly disabled via build config 00:02:05.223 node: explicitly disabled via build config 00:02:05.223 00:02:05.223 drivers: 00:02:05.223 common/cpt: not in enabled drivers build config 00:02:05.223 common/dpaax: not in enabled drivers build config 00:02:05.223 common/iavf: not in enabled drivers build config 00:02:05.223 common/idpf: not in enabled drivers build config 00:02:05.223 common/ionic: not in enabled drivers build config 00:02:05.223 common/mvep: not in enabled drivers build config 00:02:05.223 common/octeontx: not in enabled drivers build config 00:02:05.223 bus/auxiliary: not in enabled drivers build config 00:02:05.223 bus/cdx: not in enabled drivers build config 00:02:05.223 bus/dpaa: not in enabled drivers build config 00:02:05.223 bus/fslmc: not in enabled drivers build config 00:02:05.223 bus/ifpga: not in enabled drivers build config 00:02:05.223 bus/platform: not in enabled drivers build config 00:02:05.223 bus/uacce: not in enabled drivers build config 00:02:05.223 bus/vmbus: not in enabled drivers build config 00:02:05.223 common/cnxk: not in enabled drivers build config 00:02:05.223 common/mlx5: not in enabled drivers build config 00:02:05.223 common/nfp: not in enabled drivers build config 00:02:05.223 common/nitrox: not in enabled drivers build config 00:02:05.223 common/qat: not in enabled drivers build config 00:02:05.223 common/sfc_efx: not in enabled drivers build config 00:02:05.223 mempool/bucket: not in enabled drivers build config 00:02:05.223 mempool/cnxk: not in enabled drivers build config 00:02:05.223 mempool/dpaa: not in enabled drivers build config 00:02:05.223 mempool/dpaa2: not in enabled drivers build config 00:02:05.223 mempool/octeontx: not in enabled drivers build config 00:02:05.223 mempool/stack: not in enabled drivers build config 00:02:05.223 dma/cnxk: not in enabled drivers build config 00:02:05.223 dma/dpaa: not in enabled drivers build config 00:02:05.223 dma/dpaa2: not in enabled drivers build config 00:02:05.223 dma/hisilicon: not in enabled drivers build config 00:02:05.223 dma/idxd: not in enabled drivers build config 00:02:05.223 dma/ioat: not in enabled drivers build config 00:02:05.223 dma/skeleton: not in enabled drivers build config 00:02:05.223 net/af_packet: not in enabled drivers build config 00:02:05.223 net/af_xdp: not in enabled drivers build config 00:02:05.223 net/ark: not in enabled drivers build config 00:02:05.223 net/atlantic: not in enabled drivers build config 00:02:05.223 net/avp: not in enabled drivers build config 00:02:05.223 net/axgbe: not in enabled drivers build config 00:02:05.223 net/bnx2x: not in enabled drivers build config 00:02:05.223 net/bnxt: not in enabled drivers build config 00:02:05.223 net/bonding: not in enabled drivers build config 00:02:05.223 net/cnxk: not in enabled drivers build config 00:02:05.223 net/cpfl: not in enabled drivers build config 00:02:05.223 net/cxgbe: not in enabled drivers build config 00:02:05.223 net/dpaa: not in enabled drivers build config 00:02:05.223 net/dpaa2: not in enabled drivers build config 00:02:05.223 net/e1000: not in enabled drivers build config 00:02:05.223 net/ena: not in enabled drivers build config 00:02:05.223 net/enetc: not in enabled drivers build config 00:02:05.223 net/enetfec: not in enabled drivers build config 00:02:05.223 net/enic: not in enabled drivers build config 00:02:05.223 net/failsafe: 
not in enabled drivers build config 00:02:05.223 net/fm10k: not in enabled drivers build config 00:02:05.223 net/gve: not in enabled drivers build config 00:02:05.223 net/hinic: not in enabled drivers build config 00:02:05.223 net/hns3: not in enabled drivers build config 00:02:05.223 net/i40e: not in enabled drivers build config 00:02:05.223 net/iavf: not in enabled drivers build config 00:02:05.223 net/ice: not in enabled drivers build config 00:02:05.223 net/idpf: not in enabled drivers build config 00:02:05.223 net/igc: not in enabled drivers build config 00:02:05.223 net/ionic: not in enabled drivers build config 00:02:05.223 net/ipn3ke: not in enabled drivers build config 00:02:05.223 net/ixgbe: not in enabled drivers build config 00:02:05.223 net/mana: not in enabled drivers build config 00:02:05.223 net/memif: not in enabled drivers build config 00:02:05.223 net/mlx4: not in enabled drivers build config 00:02:05.223 net/mlx5: not in enabled drivers build config 00:02:05.223 net/mvneta: not in enabled drivers build config 00:02:05.223 net/mvpp2: not in enabled drivers build config 00:02:05.223 net/netvsc: not in enabled drivers build config 00:02:05.223 net/nfb: not in enabled drivers build config 00:02:05.223 net/nfp: not in enabled drivers build config 00:02:05.223 net/ngbe: not in enabled drivers build config 00:02:05.223 net/null: not in enabled drivers build config 00:02:05.223 net/octeontx: not in enabled drivers build config 00:02:05.223 net/octeon_ep: not in enabled drivers build config 00:02:05.223 net/pcap: not in enabled drivers build config 00:02:05.223 net/pfe: not in enabled drivers build config 00:02:05.223 net/qede: not in enabled drivers build config 00:02:05.223 net/ring: not in enabled drivers build config 00:02:05.223 net/sfc: not in enabled drivers build config 00:02:05.223 net/softnic: not in enabled drivers build config 00:02:05.223 net/tap: not in enabled drivers build config 00:02:05.223 net/thunderx: not in enabled drivers build config 00:02:05.223 net/txgbe: not in enabled drivers build config 00:02:05.223 net/vdev_netvsc: not in enabled drivers build config 00:02:05.223 net/vhost: not in enabled drivers build config 00:02:05.223 net/virtio: not in enabled drivers build config 00:02:05.223 net/vmxnet3: not in enabled drivers build config 00:02:05.223 raw/*: missing internal dependency, "rawdev" 00:02:05.223 crypto/armv8: not in enabled drivers build config 00:02:05.223 crypto/bcmfs: not in enabled drivers build config 00:02:05.223 crypto/caam_jr: not in enabled drivers build config 00:02:05.223 crypto/ccp: not in enabled drivers build config 00:02:05.223 crypto/cnxk: not in enabled drivers build config 00:02:05.223 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.223 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.223 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.223 crypto/mlx5: not in enabled drivers build config 00:02:05.223 crypto/mvsam: not in enabled drivers build config 00:02:05.223 crypto/nitrox: not in enabled drivers build config 00:02:05.223 crypto/null: not in enabled drivers build config 00:02:05.223 crypto/octeontx: not in enabled drivers build config 00:02:05.223 crypto/openssl: not in enabled drivers build config 00:02:05.224 crypto/scheduler: not in enabled drivers build config 00:02:05.224 crypto/uadk: not in enabled drivers build config 00:02:05.224 crypto/virtio: not in enabled drivers build config 00:02:05.224 compress/isal: not in enabled drivers build config 00:02:05.224 compress/mlx5: not 
in enabled drivers build config 00:02:05.224 compress/nitrox: not in enabled drivers build config 00:02:05.224 compress/octeontx: not in enabled drivers build config 00:02:05.224 compress/zlib: not in enabled drivers build config 00:02:05.224 regex/*: missing internal dependency, "regexdev" 00:02:05.224 ml/*: missing internal dependency, "mldev" 00:02:05.224 vdpa/ifc: not in enabled drivers build config 00:02:05.224 vdpa/mlx5: not in enabled drivers build config 00:02:05.224 vdpa/nfp: not in enabled drivers build config 00:02:05.224 vdpa/sfc: not in enabled drivers build config 00:02:05.224 event/*: missing internal dependency, "eventdev" 00:02:05.224 baseband/*: missing internal dependency, "bbdev" 00:02:05.224 gpu/*: missing internal dependency, "gpudev" 00:02:05.224 00:02:05.224 00:02:05.483 Build targets in project: 85 00:02:05.483 00:02:05.483 DPDK 24.03.0 00:02:05.483 00:02:05.483 User defined options 00:02:05.483 buildtype : debug 00:02:05.483 default_library : shared 00:02:05.483 libdir : lib 00:02:05.483 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.483 b_sanitize : address 00:02:05.483 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.483 c_link_args : 00:02:05.483 cpu_instruction_set: native 00:02:05.483 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.483 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.483 enable_docs : false 00:02:05.483 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.483 enable_kmods : false 00:02:05.483 max_lcores : 128 00:02:05.483 tests : false 00:02:05.483 00:02:05.483 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.742 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:06.002 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:06.002 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:06.002 [3/268] Linking static target lib/librte_kvargs.a 00:02:06.002 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:06.002 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:06.002 [6/268] Linking static target lib/librte_log.a 00:02:06.570 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.570 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:06.830 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:06.830 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:06.830 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:06.830 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:06.830 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:06.830 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:06.830 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:06.830 [16/268] Linking static target 
lib/librte_telemetry.a 00:02:06.830 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.090 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.090 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.090 [20/268] Linking target lib/librte_log.so.24.1 00:02:07.349 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:07.349 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:07.608 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:07.608 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:07.867 [25/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.867 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:07.867 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:07.867 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:07.867 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:07.867 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:07.867 [31/268] Linking target lib/librte_telemetry.so.24.1 00:02:07.867 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:07.867 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:07.867 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.126 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.126 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:08.126 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.695 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.695 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.695 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.954 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:08.954 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.954 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.954 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:08.954 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:08.954 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.212 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.212 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.212 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.212 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.212 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.779 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.779 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.779 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:10.037 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.037 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:10.037 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.037 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.037 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.037 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.294 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.294 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.552 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.552 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.552 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.811 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.068 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.068 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.068 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.326 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.326 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:11.326 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.326 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:11.585 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:11.585 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.585 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.843 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.843 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:11.843 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.843 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.101 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.101 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:12.360 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.360 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:12.618 [85/268] Linking static target lib/librte_eal.a 00:02:12.618 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:12.618 [87/268] Linking static target lib/librte_ring.a 00:02:12.876 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:12.876 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:12.876 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:12.876 [91/268] Linking static target lib/librte_rcu.a 00:02:12.876 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:12.876 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.134 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.134 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.392 [96/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.392 [97/268] Linking static target lib/librte_mempool.a 00:02:13.392 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.957 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.957 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.957 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.957 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.957 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.957 [104/268] Linking static target lib/librte_mbuf.a 00:02:13.957 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.523 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.523 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.523 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.523 [109/268] Linking static target lib/librte_net.a 00:02:14.782 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.782 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.782 [112/268] Linking static target lib/librte_meter.a 00:02:14.782 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.041 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.041 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.041 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.041 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.300 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.558 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.124 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.124 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.124 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.124 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.382 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.382 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.382 [126/268] Linking static target lib/librte_pci.a 00:02:16.640 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.640 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.640 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.898 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:16.898 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.898 [132/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.898 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.898 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.898 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.898 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.898 [137/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.898 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:17.157 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:17.157 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:17.157 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:17.157 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:17.157 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.157 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:17.415 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:17.415 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.415 [147/268] Linking static target lib/librte_cmdline.a 00:02:17.673 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:17.673 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:18.239 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.239 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:18.239 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:18.239 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:18.239 [154/268] Linking static target lib/librte_timer.a 00:02:18.239 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.239 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.239 [157/268] Linking static target lib/librte_ethdev.a 00:02:18.806 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.806 [159/268] Linking static target lib/librte_compressdev.a 00:02:18.806 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.806 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.806 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.064 [163/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.064 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:19.064 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:19.064 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.322 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:19.322 [168/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.322 [169/268] Linking static target lib/librte_hash.a 00:02:19.579 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:19.579 [171/268] Linking static target lib/librte_dmadev.a 00:02:19.579 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:19.579 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.579 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.579 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.143 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
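The configure output earlier in this log shows meson's two-level CPU-feature probe: the compiler accepts -mavx512f ("Compiler for C supports arguments -mavx512f: YES"), but __AVX512F__ is undefined in the baseline build. A minimal sketch of that kind of probe, assuming GCC/Clang builtins rather than DPDK's actual test program:

```c
/* Sketch of a compile-time + run-time CPU feature probe, similar in
 * spirit to the "Checking if AVX512 checking compiles" step above.
 * Assumes GCC/Clang (__builtin_cpu_supports); illustrative only. */
#include <stdio.h>

int main(void)
{
#ifdef __AVX512F__
    puts("built with -mavx512f: AVX-512F code paths compiled in");
#else
    puts("__AVX512F__ undefined: baseline build (as in the log above)");
#endif
    /* The run-time check decides whether optimized paths may execute. */
    if (__builtin_cpu_supports("avx512f"))
        puts("CPU supports AVX-512F at run time");
    else
        puts("CPU lacks AVX-512F; generic paths will be used");
    return 0;
}
```

The split matters: the flag check tells the build what it may compile, while the run-time check gates what may actually execute on the host CPU.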
00:02:20.143 [177/268] Linking static target lib/librte_cryptodev.a 00:02:20.143 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.143 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.401 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.401 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.401 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.401 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.401 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.401 [185/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.401 [186/268] Linking static target lib/librte_power.a 00:02:21.335 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.336 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.336 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.336 [190/268] Linking static target lib/librte_reorder.a 00:02:21.336 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.336 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.336 [193/268] Linking static target lib/librte_security.a 00:02:21.596 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.596 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.873 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.873 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.170 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.435 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.435 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:22.435 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.435 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.435 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.435 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.694 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.952 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.952 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.952 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.952 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.952 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.952 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.211 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.211 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.211 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.211 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:23.211 [216/268] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.211 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.211 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.211 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.211 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.211 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:23.470 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.470 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.470 [224/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.470 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.470 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:23.729 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.297 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.297 [229/268] Linking target lib/librte_eal.so.24.1 00:02:24.297 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.297 [231/268] Linking target lib/librte_timer.so.24.1 00:02:24.297 [232/268] Linking target lib/librte_meter.so.24.1 00:02:24.297 [233/268] Linking target lib/librte_pci.so.24.1 00:02:24.556 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.556 [235/268] Linking target lib/librte_ring.so.24.1 00:02:24.556 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.556 [237/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.556 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.556 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.556 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.556 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.556 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.556 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:24.556 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:24.815 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.815 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.815 [247/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.815 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.815 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.074 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.074 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:25.074 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.074 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.074 [254/268] Linking target lib/librte_net.so.24.1 00:02:25.074 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.333 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.333 
[257/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.333 [258/268] Linking target lib/librte_security.so.24.1 00:02:25.333 [259/268] Linking target lib/librte_hash.so.24.1 00:02:25.592 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.161 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.161 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.420 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.420 [264/268] Linking target lib/librte_power.so.24.1 00:02:28.324 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.583 [266/268] Linking static target lib/librte_vhost.a 00:02:30.496 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.496 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:30.496 INFO: autodetecting backend as ninja 00:02:30.496 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:31.445 CC lib/ut/ut.o 00:02:31.445 CC lib/log/log_deprecated.o 00:02:31.445 CC lib/log/log_flags.o 00:02:31.445 CC lib/log/log.o 00:02:31.445 CC lib/ut_mock/mock.o 00:02:31.703 LIB libspdk_log.a 00:02:31.703 LIB libspdk_ut_mock.a 00:02:31.703 LIB libspdk_ut.a 00:02:31.703 SO libspdk_log.so.7.0 00:02:31.703 SO libspdk_ut_mock.so.6.0 00:02:31.703 SO libspdk_ut.so.2.0 00:02:31.703 SYMLINK libspdk_ut_mock.so 00:02:31.703 SYMLINK libspdk_ut.so 00:02:31.703 SYMLINK libspdk_log.so 00:02:31.960 CC lib/dma/dma.o 00:02:31.960 CC lib/ioat/ioat.o 00:02:31.960 CC lib/util/base64.o 00:02:31.960 CC lib/util/bit_array.o 00:02:31.960 CXX lib/trace_parser/trace.o 00:02:31.960 CC lib/util/cpuset.o 00:02:31.960 CC lib/util/crc16.o 00:02:31.960 CC lib/util/crc32.o 00:02:31.960 CC lib/util/crc32c.o 00:02:32.218 CC lib/vfio_user/host/vfio_user_pci.o 00:02:32.218 CC lib/util/crc32_ieee.o 00:02:32.218 CC lib/util/crc64.o 00:02:32.218 CC lib/util/dif.o 00:02:32.218 LIB libspdk_dma.a 00:02:32.218 CC lib/util/fd.o 00:02:32.218 CC lib/util/file.o 00:02:32.218 SO libspdk_dma.so.4.0 00:02:32.218 CC lib/util/hexlify.o 00:02:32.475 CC lib/util/iov.o 00:02:32.475 SYMLINK libspdk_dma.so 00:02:32.475 CC lib/util/math.o 00:02:32.475 CC lib/vfio_user/host/vfio_user.o 00:02:32.475 LIB libspdk_ioat.a 00:02:32.475 CC lib/util/pipe.o 00:02:32.475 CC lib/util/strerror_tls.o 00:02:32.475 SO libspdk_ioat.so.7.0 00:02:32.475 CC lib/util/string.o 00:02:32.476 SYMLINK libspdk_ioat.so 00:02:32.476 CC lib/util/uuid.o 00:02:32.476 CC lib/util/fd_group.o 00:02:32.476 CC lib/util/xor.o 00:02:32.476 CC lib/util/zipf.o 00:02:32.733 LIB libspdk_vfio_user.a 00:02:32.733 SO libspdk_vfio_user.so.5.0 00:02:32.733 SYMLINK libspdk_vfio_user.so 00:02:32.991 LIB libspdk_util.a 00:02:33.248 SO libspdk_util.so.9.1 00:02:33.249 LIB libspdk_trace_parser.a 00:02:33.249 SO libspdk_trace_parser.so.5.0 00:02:33.249 SYMLINK libspdk_util.so 00:02:33.507 SYMLINK libspdk_trace_parser.so 00:02:33.507 CC lib/json/json_parse.o 00:02:33.507 CC lib/rdma_utils/rdma_utils.o 00:02:33.507 CC lib/vmd/vmd.o 00:02:33.507 CC lib/json/json_util.o 00:02:33.507 CC lib/vmd/led.o 00:02:33.507 CC lib/json/json_write.o 00:02:33.507 CC lib/idxd/idxd.o 00:02:33.507 CC lib/rdma_provider/common.o 00:02:33.507 CC lib/env_dpdk/env.o 00:02:33.507 CC lib/conf/conf.o 00:02:33.764 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:33.764 CC lib/env_dpdk/memory.o 00:02:33.764 LIB 
libspdk_conf.a 00:02:33.764 CC lib/idxd/idxd_user.o 00:02:33.764 CC lib/idxd/idxd_kernel.o 00:02:33.764 LIB libspdk_rdma_utils.a 00:02:33.764 SO libspdk_conf.so.6.0 00:02:33.764 SO libspdk_rdma_utils.so.1.0 00:02:33.764 LIB libspdk_json.a 00:02:33.764 LIB libspdk_rdma_provider.a 00:02:33.764 SYMLINK libspdk_conf.so 00:02:33.764 CC lib/env_dpdk/pci.o 00:02:33.764 SO libspdk_json.so.6.0 00:02:33.764 SO libspdk_rdma_provider.so.6.0 00:02:33.764 SYMLINK libspdk_rdma_utils.so 00:02:34.023 CC lib/env_dpdk/init.o 00:02:34.023 SYMLINK libspdk_json.so 00:02:34.023 CC lib/env_dpdk/threads.o 00:02:34.023 SYMLINK libspdk_rdma_provider.so 00:02:34.023 CC lib/env_dpdk/pci_ioat.o 00:02:34.023 CC lib/env_dpdk/pci_virtio.o 00:02:34.023 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.023 CC lib/env_dpdk/pci_vmd.o 00:02:34.023 CC lib/env_dpdk/pci_idxd.o 00:02:34.281 CC lib/env_dpdk/pci_event.o 00:02:34.281 LIB libspdk_idxd.a 00:02:34.281 CC lib/env_dpdk/sigbus_handler.o 00:02:34.281 CC lib/env_dpdk/pci_dpdk.o 00:02:34.281 SO libspdk_idxd.so.12.0 00:02:34.281 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:34.281 LIB libspdk_vmd.a 00:02:34.281 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:34.281 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.281 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.281 SO libspdk_vmd.so.6.0 00:02:34.281 SYMLINK libspdk_idxd.so 00:02:34.281 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.539 SYMLINK libspdk_vmd.so 00:02:34.539 LIB libspdk_jsonrpc.a 00:02:34.797 SO libspdk_jsonrpc.so.6.0 00:02:34.797 SYMLINK libspdk_jsonrpc.so 00:02:35.056 CC lib/rpc/rpc.o 00:02:35.314 LIB libspdk_rpc.a 00:02:35.314 SO libspdk_rpc.so.6.0 00:02:35.314 SYMLINK libspdk_rpc.so 00:02:35.314 LIB libspdk_env_dpdk.a 00:02:35.573 SO libspdk_env_dpdk.so.14.1 00:02:35.573 CC lib/trace/trace.o 00:02:35.573 CC lib/keyring/keyring_rpc.o 00:02:35.573 CC lib/keyring/keyring.o 00:02:35.573 CC lib/trace/trace_rpc.o 00:02:35.573 CC lib/trace/trace_flags.o 00:02:35.573 CC lib/notify/notify.o 00:02:35.573 CC lib/notify/notify_rpc.o 00:02:35.831 SYMLINK libspdk_env_dpdk.so 00:02:35.831 LIB libspdk_notify.a 00:02:35.831 SO libspdk_notify.so.6.0 00:02:35.831 LIB libspdk_trace.a 00:02:35.831 LIB libspdk_keyring.a 00:02:35.831 SYMLINK libspdk_notify.so 00:02:35.831 SO libspdk_trace.so.10.0 00:02:35.831 SO libspdk_keyring.so.1.0 00:02:36.090 SYMLINK libspdk_trace.so 00:02:36.090 SYMLINK libspdk_keyring.so 00:02:36.348 CC lib/thread/thread.o 00:02:36.348 CC lib/thread/iobuf.o 00:02:36.348 CC lib/sock/sock.o 00:02:36.348 CC lib/sock/sock_rpc.o 00:02:36.915 LIB libspdk_sock.a 00:02:36.915 SO libspdk_sock.so.10.0 00:02:36.915 SYMLINK libspdk_sock.so 00:02:37.173 CC lib/nvme/nvme_ctrlr.o 00:02:37.173 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.173 CC lib/nvme/nvme_fabric.o 00:02:37.173 CC lib/nvme/nvme_ns_cmd.o 00:02:37.173 CC lib/nvme/nvme_ns.o 00:02:37.173 CC lib/nvme/nvme_pcie_common.o 00:02:37.173 CC lib/nvme/nvme_pcie.o 00:02:37.173 CC lib/nvme/nvme.o 00:02:37.173 CC lib/nvme/nvme_qpair.o 00:02:38.106 CC lib/nvme/nvme_quirks.o 00:02:38.106 CC lib/nvme/nvme_transport.o 00:02:38.106 CC lib/nvme/nvme_discovery.o 00:02:38.106 LIB libspdk_thread.a 00:02:38.106 SO libspdk_thread.so.10.1 00:02:38.106 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.364 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.364 CC lib/nvme/nvme_tcp.o 00:02:38.364 SYMLINK libspdk_thread.so 00:02:38.364 CC lib/nvme/nvme_opal.o 00:02:38.364 CC lib/nvme/nvme_io_msg.o 00:02:38.364 CC lib/nvme/nvme_poll_group.o 00:02:38.623 CC lib/nvme/nvme_zns.o 00:02:38.881 CC lib/nvme/nvme_stubs.o 00:02:38.881 CC 
lib/nvme/nvme_auth.o 00:02:38.881 CC lib/nvme/nvme_cuse.o 00:02:38.881 CC lib/nvme/nvme_rdma.o 00:02:39.139 CC lib/accel/accel.o 00:02:39.139 CC lib/blob/blobstore.o 00:02:39.397 CC lib/init/json_config.o 00:02:39.397 CC lib/init/subsystem.o 00:02:39.397 CC lib/virtio/virtio.o 00:02:39.659 CC lib/virtio/virtio_vhost_user.o 00:02:39.659 CC lib/init/subsystem_rpc.o 00:02:39.659 CC lib/init/rpc.o 00:02:39.916 CC lib/virtio/virtio_vfio_user.o 00:02:39.916 LIB libspdk_init.a 00:02:39.916 CC lib/blob/request.o 00:02:39.916 CC lib/virtio/virtio_pci.o 00:02:39.916 CC lib/blob/zeroes.o 00:02:39.916 SO libspdk_init.so.5.0 00:02:39.916 CC lib/accel/accel_rpc.o 00:02:40.173 SYMLINK libspdk_init.so 00:02:40.173 CC lib/accel/accel_sw.o 00:02:40.173 CC lib/blob/blob_bs_dev.o 00:02:40.431 LIB libspdk_virtio.a 00:02:40.431 SO libspdk_virtio.so.7.0 00:02:40.431 CC lib/event/app.o 00:02:40.431 CC lib/event/log_rpc.o 00:02:40.431 CC lib/event/reactor.o 00:02:40.431 CC lib/event/app_rpc.o 00:02:40.431 CC lib/event/scheduler_static.o 00:02:40.431 SYMLINK libspdk_virtio.so 00:02:40.431 LIB libspdk_accel.a 00:02:40.431 SO libspdk_accel.so.15.1 00:02:40.688 SYMLINK libspdk_accel.so 00:02:40.688 LIB libspdk_nvme.a 00:02:40.946 CC lib/bdev/bdev.o 00:02:40.946 CC lib/bdev/bdev_rpc.o 00:02:40.946 CC lib/bdev/bdev_zone.o 00:02:40.946 CC lib/bdev/part.o 00:02:40.946 CC lib/bdev/scsi_nvme.o 00:02:40.946 SO libspdk_nvme.so.13.1 00:02:40.946 LIB libspdk_event.a 00:02:40.946 SO libspdk_event.so.14.0 00:02:41.204 SYMLINK libspdk_event.so 00:02:41.204 SYMLINK libspdk_nvme.so 00:02:43.753 LIB libspdk_blob.a 00:02:43.753 SO libspdk_blob.so.11.0 00:02:43.753 SYMLINK libspdk_blob.so 00:02:44.012 CC lib/blobfs/blobfs.o 00:02:44.012 CC lib/blobfs/tree.o 00:02:44.012 CC lib/lvol/lvol.o 00:02:44.270 LIB libspdk_bdev.a 00:02:44.270 SO libspdk_bdev.so.15.1 00:02:44.528 SYMLINK libspdk_bdev.so 00:02:44.786 CC lib/nvmf/ctrlr.o 00:02:44.786 CC lib/nvmf/ctrlr_discovery.o 00:02:44.786 CC lib/nvmf/ctrlr_bdev.o 00:02:44.786 CC lib/nbd/nbd.o 00:02:44.786 CC lib/ublk/ublk.o 00:02:44.786 CC lib/nbd/nbd_rpc.o 00:02:44.786 CC lib/scsi/dev.o 00:02:44.786 CC lib/ftl/ftl_core.o 00:02:45.044 CC lib/ftl/ftl_init.o 00:02:45.044 LIB libspdk_blobfs.a 00:02:45.044 CC lib/scsi/lun.o 00:02:45.044 SO libspdk_blobfs.so.10.0 00:02:45.301 LIB libspdk_lvol.a 00:02:45.302 SYMLINK libspdk_blobfs.so 00:02:45.302 CC lib/nvmf/subsystem.o 00:02:45.302 CC lib/nvmf/nvmf.o 00:02:45.302 SO libspdk_lvol.so.10.0 00:02:45.302 LIB libspdk_nbd.a 00:02:45.302 SO libspdk_nbd.so.7.0 00:02:45.302 CC lib/ftl/ftl_layout.o 00:02:45.302 SYMLINK libspdk_lvol.so 00:02:45.302 CC lib/ftl/ftl_debug.o 00:02:45.302 SYMLINK libspdk_nbd.so 00:02:45.302 CC lib/ftl/ftl_io.o 00:02:45.302 CC lib/nvmf/nvmf_rpc.o 00:02:45.560 CC lib/scsi/port.o 00:02:45.560 CC lib/scsi/scsi.o 00:02:45.560 CC lib/ftl/ftl_sb.o 00:02:45.560 CC lib/ublk/ublk_rpc.o 00:02:45.560 CC lib/ftl/ftl_l2p.o 00:02:45.817 CC lib/nvmf/transport.o 00:02:45.817 CC lib/nvmf/tcp.o 00:02:45.817 CC lib/scsi/scsi_bdev.o 00:02:45.817 CC lib/ftl/ftl_l2p_flat.o 00:02:45.818 LIB libspdk_ublk.a 00:02:45.818 SO libspdk_ublk.so.3.0 00:02:45.818 CC lib/scsi/scsi_pr.o 00:02:46.076 SYMLINK libspdk_ublk.so 00:02:46.076 CC lib/ftl/ftl_nv_cache.o 00:02:46.076 CC lib/scsi/scsi_rpc.o 00:02:46.334 CC lib/scsi/task.o 00:02:46.334 CC lib/nvmf/stubs.o 00:02:46.334 CC lib/nvmf/mdns_server.o 00:02:46.334 CC lib/nvmf/rdma.o 00:02:46.334 CC lib/ftl/ftl_band.o 00:02:46.592 LIB libspdk_scsi.a 00:02:46.592 CC lib/nvmf/auth.o 00:02:46.592 SO libspdk_scsi.so.9.0 
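Because this build was configured with default_library : shared, each component above is emitted as a static archive (LIB), a versioned shared object (SO), and an unversioned SYMLINK. A hedged sketch of consuming one of those .so files by hand; the library path and the spdk_log_set_print_level symbol name are illustrative assumptions, not taken from the log:

```c
/* Sketch: loading one of the shared libraries produced by the SO/SYMLINK
 * steps above and resolving a symbol manually. Build with: cc demo.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("libspdk_log.so", RTLD_NOW);
    if (!h) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    /* Symbol name assumed purely for illustration. */
    void (*set_level)(int) = (void (*)(int))dlsym(h, "spdk_log_set_print_level");
    printf("symbol %s\n", set_level ? "resolved" : "missing");
    dlclose(h);
    return 0;
}
```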
00:02:46.850 SYMLINK libspdk_scsi.so 00:02:46.850 CC lib/ftl/ftl_band_ops.o 00:02:46.850 CC lib/ftl/ftl_writer.o 00:02:46.850 CC lib/iscsi/conn.o 00:02:46.850 CC lib/ftl/ftl_rq.o 00:02:47.108 CC lib/vhost/vhost.o 00:02:47.108 CC lib/ftl/ftl_reloc.o 00:02:47.108 CC lib/iscsi/init_grp.o 00:02:47.108 CC lib/ftl/ftl_l2p_cache.o 00:02:47.108 CC lib/ftl/ftl_p2l.o 00:02:47.366 CC lib/iscsi/iscsi.o 00:02:47.366 CC lib/vhost/vhost_rpc.o 00:02:47.624 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.624 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.624 CC lib/vhost/vhost_scsi.o 00:02:47.624 CC lib/iscsi/md5.o 00:02:47.882 CC lib/vhost/vhost_blk.o 00:02:47.882 CC lib/vhost/rte_vhost_user.o 00:02:47.882 CC lib/iscsi/param.o 00:02:47.882 CC lib/iscsi/portal_grp.o 00:02:47.882 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.882 CC lib/iscsi/tgt_node.o 00:02:48.140 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:48.140 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:48.140 CC lib/iscsi/iscsi_subsystem.o 00:02:48.140 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:48.140 CC lib/iscsi/iscsi_rpc.o 00:02:48.398 CC lib/iscsi/task.o 00:02:48.398 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:48.657 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.657 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:48.657 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.657 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.657 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.657 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.657 CC lib/ftl/utils/ftl_conf.o 00:02:48.915 CC lib/ftl/utils/ftl_md.o 00:02:48.915 CC lib/ftl/utils/ftl_mempool.o 00:02:48.915 CC lib/ftl/utils/ftl_bitmap.o 00:02:48.915 CC lib/ftl/utils/ftl_property.o 00:02:48.915 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.173 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.173 LIB libspdk_iscsi.a 00:02:49.173 LIB libspdk_vhost.a 00:02:49.173 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.173 SO libspdk_iscsi.so.8.0 00:02:49.173 SO libspdk_vhost.so.8.0 00:02:49.173 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.173 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.431 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.431 LIB libspdk_nvmf.a 00:02:49.431 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:49.431 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:49.431 SYMLINK libspdk_vhost.so 00:02:49.431 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:49.431 SYMLINK libspdk_iscsi.so 00:02:49.431 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:49.431 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:49.431 SO libspdk_nvmf.so.18.1 00:02:49.431 CC lib/ftl/base/ftl_base_dev.o 00:02:49.431 CC lib/ftl/base/ftl_base_bdev.o 00:02:49.431 CC lib/ftl/ftl_trace.o 00:02:49.688 SYMLINK libspdk_nvmf.so 00:02:49.946 LIB libspdk_ftl.a 00:02:50.204 SO libspdk_ftl.so.9.0 00:02:50.462 SYMLINK libspdk_ftl.so 00:02:50.720 CC module/env_dpdk/env_dpdk_rpc.o 00:02:50.978 CC module/keyring/file/keyring.o 00:02:50.978 CC module/keyring/linux/keyring.o 00:02:50.978 CC module/scheduler/gscheduler/gscheduler.o 00:02:50.978 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:50.978 CC module/accel/ioat/accel_ioat.o 00:02:50.978 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:50.978 CC module/blob/bdev/blob_bdev.o 00:02:50.978 CC module/sock/posix/posix.o 00:02:50.978 CC module/accel/error/accel_error.o 00:02:50.978 LIB libspdk_env_dpdk_rpc.a 00:02:50.978 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.978 CC module/keyring/file/keyring_rpc.o 00:02:50.978 SYMLINK libspdk_env_dpdk_rpc.so 00:02:50.978 CC module/accel/error/accel_error_rpc.o 00:02:50.978 LIB libspdk_scheduler_gscheduler.a 00:02:50.978 CC module/keyring/linux/keyring_rpc.o 00:02:51.237 SO 
libspdk_scheduler_gscheduler.so.4.0 00:02:51.237 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.237 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.237 LIB libspdk_scheduler_dynamic.a 00:02:51.237 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.237 SO libspdk_scheduler_dynamic.so.4.0 00:02:51.237 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.237 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.237 LIB libspdk_keyring_file.a 00:02:51.237 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.237 LIB libspdk_accel_error.a 00:02:51.237 LIB libspdk_keyring_linux.a 00:02:51.237 LIB libspdk_blob_bdev.a 00:02:51.237 SO libspdk_keyring_file.so.1.0 00:02:51.237 SO libspdk_accel_error.so.2.0 00:02:51.237 SO libspdk_keyring_linux.so.1.0 00:02:51.237 SO libspdk_blob_bdev.so.11.0 00:02:51.237 LIB libspdk_accel_ioat.a 00:02:51.237 SYMLINK libspdk_accel_error.so 00:02:51.237 SO libspdk_accel_ioat.so.6.0 00:02:51.237 SYMLINK libspdk_keyring_file.so 00:02:51.237 SYMLINK libspdk_keyring_linux.so 00:02:51.495 SYMLINK libspdk_blob_bdev.so 00:02:51.495 CC module/accel/dsa/accel_dsa.o 00:02:51.495 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.495 CC module/accel/iaa/accel_iaa.o 00:02:51.495 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.495 SYMLINK libspdk_accel_ioat.so 00:02:51.753 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.753 CC module/bdev/error/vbdev_error.o 00:02:51.753 CC module/bdev/delay/vbdev_delay.o 00:02:51.753 CC module/bdev/gpt/gpt.o 00:02:51.753 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.753 LIB libspdk_accel_dsa.a 00:02:51.753 CC module/bdev/malloc/bdev_malloc.o 00:02:51.753 SO libspdk_accel_dsa.so.5.0 00:02:51.753 LIB libspdk_accel_iaa.a 00:02:51.753 CC module/bdev/null/bdev_null.o 00:02:51.753 SO libspdk_accel_iaa.so.3.0 00:02:51.753 SYMLINK libspdk_accel_dsa.so 00:02:51.753 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.753 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.753 SYMLINK libspdk_accel_iaa.so 00:02:51.753 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:51.753 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.011 LIB libspdk_sock_posix.a 00:02:52.011 SO libspdk_sock_posix.so.6.0 00:02:52.011 CC module/bdev/null/bdev_null_rpc.o 00:02:52.011 LIB libspdk_blobfs_bdev.a 00:02:52.011 SYMLINK libspdk_sock_posix.so 00:02:52.011 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.011 LIB libspdk_bdev_error.a 00:02:52.011 SO libspdk_blobfs_bdev.so.6.0 00:02:52.011 SO libspdk_bdev_error.so.6.0 00:02:52.011 LIB libspdk_bdev_gpt.a 00:02:52.011 SYMLINK libspdk_blobfs_bdev.so 00:02:52.269 LIB libspdk_bdev_null.a 00:02:52.269 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.269 SYMLINK libspdk_bdev_error.so 00:02:52.269 SO libspdk_bdev_gpt.so.6.0 00:02:52.269 SO libspdk_bdev_null.so.6.0 00:02:52.269 CC module/bdev/nvme/bdev_nvme.o 00:02:52.269 LIB libspdk_bdev_delay.a 00:02:52.269 SYMLINK libspdk_bdev_gpt.so 00:02:52.269 SYMLINK libspdk_bdev_null.so 00:02:52.269 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.269 SO libspdk_bdev_delay.so.6.0 00:02:52.269 LIB libspdk_bdev_lvol.a 00:02:52.269 LIB libspdk_bdev_malloc.a 00:02:52.269 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.269 SO libspdk_bdev_lvol.so.6.0 00:02:52.269 CC module/bdev/raid/bdev_raid.o 00:02:52.269 SYMLINK libspdk_bdev_delay.so 00:02:52.269 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.527 SO libspdk_bdev_malloc.so.6.0 00:02:52.527 CC module/bdev/split/vbdev_split.o 00:02:52.527 SYMLINK libspdk_bdev_lvol.so 00:02:52.527 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.527 SYMLINK libspdk_bdev_malloc.so 00:02:52.527 CC module/bdev/xnvme/bdev_xnvme.o 
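The bdev_null, bdev_malloc, bdev_delay, bdev_raid, and similar objects being compiled here are all pluggable modules behind one block-device abstraction. A generic sketch of the registration pattern such modules typically follow; the names here are illustrative, not SPDK's actual registration API:

```c
/* Generic sketch of a pluggable-module pattern: each module registers an
 * ops table with the core before main() runs. Illustrative only. */
#include <stdio.h>

struct bdev_module_ops {
    const char *name;
    int (*init)(void);
};

static int null_init(void) { puts("null bdev module up"); return 0; }

static struct bdev_module_ops g_null_module = { "null", null_init };

/* A constructor function is a common way to get link-time registration:
 * merely linking the module's object file makes it known to the core. */
__attribute__((constructor)) static void register_null(void)
{
    printf("registering bdev module '%s'\n", g_null_module.name);
    g_null_module.init();
}

int main(void) { return 0; }
```

This is why the log shows each module as a separately linked object: enabling or disabling a module is a link-time decision, not a code change in the core.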
00:02:52.527 CC module/bdev/nvme/nvme_rpc.o 00:02:52.527 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.527 CC module/bdev/aio/bdev_aio.o 00:02:52.786 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.786 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.786 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:52.786 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.786 CC module/bdev/nvme/vbdev_opal.o 00:02:52.786 LIB libspdk_bdev_split.a 00:02:52.786 LIB libspdk_bdev_zone_block.a 00:02:52.786 SO libspdk_bdev_split.so.6.0 00:02:52.786 LIB libspdk_bdev_passthru.a 00:02:53.044 SO libspdk_bdev_zone_block.so.6.0 00:02:53.044 SO libspdk_bdev_passthru.so.6.0 00:02:53.044 LIB libspdk_bdev_xnvme.a 00:02:53.044 SYMLINK libspdk_bdev_zone_block.so 00:02:53.044 SYMLINK libspdk_bdev_split.so 00:02:53.044 SO libspdk_bdev_xnvme.so.3.0 00:02:53.044 SYMLINK libspdk_bdev_passthru.so 00:02:53.044 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.044 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.044 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.044 SYMLINK libspdk_bdev_xnvme.so 00:02:53.044 CC module/bdev/raid/raid0.o 00:02:53.044 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.044 CC module/bdev/ftl/bdev_ftl.o 00:02:53.044 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.302 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.302 LIB libspdk_bdev_aio.a 00:02:53.302 SO libspdk_bdev_aio.so.6.0 00:02:53.302 CC module/bdev/raid/raid1.o 00:02:53.302 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.302 SYMLINK libspdk_bdev_aio.so 00:02:53.302 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.302 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.560 CC module/bdev/raid/concat.o 00:02:53.560 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.560 LIB libspdk_bdev_ftl.a 00:02:53.560 SO libspdk_bdev_ftl.so.6.0 00:02:53.560 SYMLINK libspdk_bdev_ftl.so 00:02:53.560 LIB libspdk_bdev_iscsi.a 00:02:53.819 LIB libspdk_bdev_raid.a 00:02:53.819 SO libspdk_bdev_iscsi.so.6.0 00:02:53.819 SO libspdk_bdev_raid.so.6.0 00:02:53.819 SYMLINK libspdk_bdev_iscsi.so 00:02:53.819 LIB libspdk_bdev_virtio.a 00:02:53.819 SYMLINK libspdk_bdev_raid.so 00:02:53.819 SO libspdk_bdev_virtio.so.6.0 00:02:54.078 SYMLINK libspdk_bdev_virtio.so 00:02:55.552 LIB libspdk_bdev_nvme.a 00:02:55.552 SO libspdk_bdev_nvme.so.7.0 00:02:55.552 SYMLINK libspdk_bdev_nvme.so 00:02:55.810 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.810 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.810 CC module/event/subsystems/vmd/vmd.o 00:02:55.810 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.810 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.810 CC module/event/subsystems/sock/sock.o 00:02:55.810 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.810 CC module/event/subsystems/keyring/keyring.o 00:02:56.068 LIB libspdk_event_keyring.a 00:02:56.068 LIB libspdk_event_vhost_blk.a 00:02:56.068 LIB libspdk_event_scheduler.a 00:02:56.068 LIB libspdk_event_sock.a 00:02:56.068 LIB libspdk_event_vmd.a 00:02:56.068 LIB libspdk_event_iobuf.a 00:02:56.068 SO libspdk_event_keyring.so.1.0 00:02:56.068 SO libspdk_event_vhost_blk.so.3.0 00:02:56.068 SO libspdk_event_scheduler.so.4.0 00:02:56.068 SO libspdk_event_sock.so.5.0 00:02:56.068 SO libspdk_event_vmd.so.6.0 00:02:56.068 SO libspdk_event_iobuf.so.3.0 00:02:56.068 SYMLINK libspdk_event_scheduler.so 00:02:56.068 SYMLINK libspdk_event_keyring.so 00:02:56.326 SYMLINK libspdk_event_vhost_blk.so 00:02:56.326 SYMLINK libspdk_event_sock.so 00:02:56.326 SYMLINK libspdk_event_vmd.so 00:02:56.326 SYMLINK libspdk_event_iobuf.so 
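The event_* libraries linked above implement SPDK's polled, reactor-style execution model. A tiny sketch of the underlying idea, not the SPDK event API: a reactor loops over registered pollers until none reports work.

```c
/* Sketch of the poller/reactor idea behind the event libraries above.
 * Illustrative only; function and type names are assumptions. */
#include <stdbool.h>
#include <stdio.h>

typedef bool (*poller_fn)(void *ctx);   /* returns true if it did work */

static bool tick(void *ctx)
{
    int *n = ctx;
    return ++*n < 3;                    /* pretend to find work twice */
}

int main(void)
{
    int count = 0;
    poller_fn pollers[] = { tick };
    size_t n = sizeof(pollers) / sizeof(pollers[0]);
    bool busy;
    do {                                /* one reactor iteration per pass */
        busy = false;
        for (size_t i = 0; i < n; i++)
            busy |= pollers[i](&count);
    } while (busy);
    printf("reactor went idle after %d polls\n", count);
    return 0;
}
```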
00:02:56.585 CC module/event/subsystems/accel/accel.o 00:02:56.585 LIB libspdk_event_accel.a 00:02:56.843 SO libspdk_event_accel.so.6.0 00:02:56.843 SYMLINK libspdk_event_accel.so 00:02:57.101 CC module/event/subsystems/bdev/bdev.o 00:02:57.359 LIB libspdk_event_bdev.a 00:02:57.360 SO libspdk_event_bdev.so.6.0 00:02:57.360 SYMLINK libspdk_event_bdev.so 00:02:57.617 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.617 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.617 CC module/event/subsystems/nbd/nbd.o 00:02:57.617 CC module/event/subsystems/scsi/scsi.o 00:02:57.617 CC module/event/subsystems/ublk/ublk.o 00:02:57.876 LIB libspdk_event_nbd.a 00:02:57.876 LIB libspdk_event_ublk.a 00:02:57.876 LIB libspdk_event_scsi.a 00:02:57.876 SO libspdk_event_nbd.so.6.0 00:02:57.876 SO libspdk_event_ublk.so.3.0 00:02:57.876 SO libspdk_event_scsi.so.6.0 00:02:57.876 SYMLINK libspdk_event_ublk.so 00:02:57.876 LIB libspdk_event_nvmf.a 00:02:57.876 SYMLINK libspdk_event_nbd.so 00:02:57.876 SYMLINK libspdk_event_scsi.so 00:02:57.876 SO libspdk_event_nvmf.so.6.0 00:02:57.876 SYMLINK libspdk_event_nvmf.so 00:02:58.134 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.134 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.393 LIB libspdk_event_iscsi.a 00:02:58.393 LIB libspdk_event_vhost_scsi.a 00:02:58.393 SO libspdk_event_iscsi.so.6.0 00:02:58.393 SO libspdk_event_vhost_scsi.so.3.0 00:02:58.393 SYMLINK libspdk_event_iscsi.so 00:02:58.393 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.651 SO libspdk.so.6.0 00:02:58.651 SYMLINK libspdk.so 00:02:58.909 CC app/spdk_lspci/spdk_lspci.o 00:02:58.909 CXX app/trace/trace.o 00:02:58.909 CC app/trace_record/trace_record.o 00:02:58.909 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.909 CC app/nvmf_tgt/nvmf_main.o 00:02:58.909 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.909 CC app/spdk_tgt/spdk_tgt.o 00:02:58.909 CC examples/util/zipf/zipf.o 00:02:58.909 CC examples/ioat/perf/perf.o 00:02:58.909 CC test/thread/poller_perf/poller_perf.o 00:02:59.168 LINK spdk_lspci 00:02:59.168 LINK interrupt_tgt 00:02:59.168 LINK zipf 00:02:59.168 LINK poller_perf 00:02:59.168 LINK spdk_tgt 00:02:59.168 LINK spdk_trace_record 00:02:59.168 LINK nvmf_tgt 00:02:59.168 LINK iscsi_tgt 00:02:59.168 LINK ioat_perf 00:02:59.427 CC app/spdk_nvme_perf/perf.o 00:02:59.427 LINK spdk_trace 00:02:59.427 CC app/spdk_nvme_identify/identify.o 00:02:59.427 TEST_HEADER include/spdk/accel.h 00:02:59.427 TEST_HEADER include/spdk/accel_module.h 00:02:59.427 TEST_HEADER include/spdk/assert.h 00:02:59.427 TEST_HEADER include/spdk/barrier.h 00:02:59.427 TEST_HEADER include/spdk/base64.h 00:02:59.427 TEST_HEADER include/spdk/bdev.h 00:02:59.427 TEST_HEADER include/spdk/bdev_module.h 00:02:59.427 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.427 TEST_HEADER include/spdk/bit_array.h 00:02:59.427 TEST_HEADER include/spdk/bit_pool.h 00:02:59.427 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.427 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.427 CC examples/ioat/verify/verify.o 00:02:59.427 TEST_HEADER include/spdk/blobfs.h 00:02:59.427 TEST_HEADER include/spdk/blob.h 00:02:59.427 TEST_HEADER include/spdk/conf.h 00:02:59.427 TEST_HEADER include/spdk/config.h 00:02:59.427 TEST_HEADER include/spdk/cpuset.h 00:02:59.427 TEST_HEADER include/spdk/crc16.h 00:02:59.427 TEST_HEADER include/spdk/crc32.h 00:02:59.427 TEST_HEADER include/spdk/crc64.h 00:02:59.427 TEST_HEADER include/spdk/dif.h 00:02:59.427 TEST_HEADER include/spdk/dma.h 00:02:59.427 TEST_HEADER include/spdk/endian.h 00:02:59.427 TEST_HEADER 
include/spdk/env_dpdk.h 00:02:59.427 TEST_HEADER include/spdk/env.h 00:02:59.427 TEST_HEADER include/spdk/event.h 00:02:59.427 TEST_HEADER include/spdk/fd_group.h 00:02:59.427 TEST_HEADER include/spdk/fd.h 00:02:59.427 TEST_HEADER include/spdk/file.h 00:02:59.427 TEST_HEADER include/spdk/ftl.h 00:02:59.427 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.427 TEST_HEADER include/spdk/hexlify.h 00:02:59.427 TEST_HEADER include/spdk/histogram_data.h 00:02:59.427 TEST_HEADER include/spdk/idxd.h 00:02:59.685 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.685 TEST_HEADER include/spdk/init.h 00:02:59.685 TEST_HEADER include/spdk/ioat.h 00:02:59.685 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.685 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.685 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.685 CC app/spdk_top/spdk_top.o 00:02:59.685 TEST_HEADER include/spdk/json.h 00:02:59.685 TEST_HEADER include/spdk/jsonrpc.h 00:02:59.685 TEST_HEADER include/spdk/keyring.h 00:02:59.685 TEST_HEADER include/spdk/keyring_module.h 00:02:59.685 TEST_HEADER include/spdk/likely.h 00:02:59.685 TEST_HEADER include/spdk/log.h 00:02:59.685 TEST_HEADER include/spdk/lvol.h 00:02:59.685 TEST_HEADER include/spdk/memory.h 00:02:59.685 TEST_HEADER include/spdk/mmio.h 00:02:59.685 TEST_HEADER include/spdk/nbd.h 00:02:59.685 TEST_HEADER include/spdk/notify.h 00:02:59.685 TEST_HEADER include/spdk/nvme.h 00:02:59.685 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.685 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.685 CC test/dma/test_dma/test_dma.o 00:02:59.685 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.685 TEST_HEADER include/spdk/nvme_spec.h 00:02:59.685 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.685 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.685 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.685 TEST_HEADER include/spdk/nvmf.h 00:02:59.685 TEST_HEADER include/spdk/nvmf_spec.h 00:02:59.685 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.685 TEST_HEADER include/spdk/opal.h 00:02:59.685 TEST_HEADER include/spdk/opal_spec.h 00:02:59.685 TEST_HEADER include/spdk/pci_ids.h 00:02:59.685 TEST_HEADER include/spdk/pipe.h 00:02:59.685 CC test/app/bdev_svc/bdev_svc.o 00:02:59.685 TEST_HEADER include/spdk/queue.h 00:02:59.685 TEST_HEADER include/spdk/reduce.h 00:02:59.685 TEST_HEADER include/spdk/rpc.h 00:02:59.685 TEST_HEADER include/spdk/scheduler.h 00:02:59.685 TEST_HEADER include/spdk/scsi.h 00:02:59.685 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.685 TEST_HEADER include/spdk/sock.h 00:02:59.685 TEST_HEADER include/spdk/stdinc.h 00:02:59.685 TEST_HEADER include/spdk/string.h 00:02:59.685 TEST_HEADER include/spdk/thread.h 00:02:59.685 TEST_HEADER include/spdk/trace.h 00:02:59.685 TEST_HEADER include/spdk/trace_parser.h 00:02:59.685 TEST_HEADER include/spdk/tree.h 00:02:59.685 TEST_HEADER include/spdk/ublk.h 00:02:59.685 TEST_HEADER include/spdk/util.h 00:02:59.685 TEST_HEADER include/spdk/uuid.h 00:02:59.685 TEST_HEADER include/spdk/version.h 00:02:59.685 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.685 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.685 TEST_HEADER include/spdk/vhost.h 00:02:59.685 TEST_HEADER include/spdk/vmd.h 00:02:59.685 TEST_HEADER include/spdk/xor.h 00:02:59.685 TEST_HEADER include/spdk/zipf.h 00:02:59.685 CXX test/cpp_headers/accel.o 00:02:59.685 CC test/env/mem_callbacks/mem_callbacks.o 00:02:59.685 LINK verify 00:02:59.685 LINK spdk_nvme_discover 00:02:59.685 CC examples/thread/thread/thread_ex.o 00:02:59.943 LINK bdev_svc 00:02:59.943 CXX test/cpp_headers/accel_module.o 
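The long TEST_HEADER list and the CXX test/cpp_headers/*.o lines that follow are a header self-containment check: each public spdk/*.h is compiled on its own in a C++ translation unit, so a header missing an include or incompatible with a C++ compiler fails immediately. A rough sketch of that pattern, with illustrative paths and flags rather than the exact test harness:

    # For each public header, generate a one-line TU that includes only that
    # header and compile it; any non-self-contained header breaks the build.
    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_${name}.cpp"
        g++ -std=c++11 -Iinclude -c "/tmp/hdr_${name}.cpp" -o "/tmp/hdr_${name}.o"
    done
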
00:02:59.943 LINK test_dma 00:02:59.943 CXX test/cpp_headers/assert.o 00:03:00.201 LINK thread 00:03:00.201 CC examples/sock/hello_world/hello_sock.o 00:03:00.201 CC examples/vmd/lsvmd/lsvmd.o 00:03:00.202 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:00.202 CXX test/cpp_headers/barrier.o 00:03:00.202 LINK lsvmd 00:03:00.460 LINK mem_callbacks 00:03:00.460 CC examples/vmd/led/led.o 00:03:00.460 CXX test/cpp_headers/base64.o 00:03:00.460 LINK spdk_nvme_perf 00:03:00.460 LINK hello_sock 00:03:00.460 CXX test/cpp_headers/bdev.o 00:03:00.460 LINK spdk_nvme_identify 00:03:00.460 CC examples/idxd/perf/perf.o 00:03:00.460 LINK led 00:03:00.718 CC test/env/vtophys/vtophys.o 00:03:00.718 CXX test/cpp_headers/bdev_module.o 00:03:00.718 LINK spdk_top 00:03:00.718 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.718 CC test/env/memory/memory_ut.o 00:03:00.718 LINK nvme_fuzz 00:03:00.718 CC test/env/pci/pci_ut.o 00:03:00.718 LINK vtophys 00:03:00.718 CXX test/cpp_headers/bdev_zone.o 00:03:00.977 CXX test/cpp_headers/bit_array.o 00:03:00.977 LINK env_dpdk_post_init 00:03:00.977 CXX test/cpp_headers/bit_pool.o 00:03:00.977 CC examples/accel/perf/accel_perf.o 00:03:00.977 LINK idxd_perf 00:03:00.977 CXX test/cpp_headers/blob_bdev.o 00:03:00.977 CC app/spdk_dd/spdk_dd.o 00:03:00.977 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:00.977 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.236 CXX test/cpp_headers/blobfs.o 00:03:01.236 CXX test/cpp_headers/blob.o 00:03:01.236 LINK pci_ut 00:03:01.236 CC examples/blob/hello_world/hello_blob.o 00:03:01.236 CXX test/cpp_headers/conf.o 00:03:01.236 CC test/app/histogram_perf/histogram_perf.o 00:03:01.236 CXX test/cpp_headers/config.o 00:03:01.495 CXX test/cpp_headers/cpuset.o 00:03:01.495 CC examples/blob/cli/blobcli.o 00:03:01.495 LINK spdk_dd 00:03:01.495 LINK histogram_perf 00:03:01.495 CXX test/cpp_headers/crc16.o 00:03:01.495 LINK accel_perf 00:03:01.495 CC test/app/jsoncat/jsoncat.o 00:03:01.495 LINK hello_blob 00:03:01.754 CXX test/cpp_headers/crc32.o 00:03:01.754 LINK jsoncat 00:03:01.754 CC test/event/event_perf/event_perf.o 00:03:01.754 CXX test/cpp_headers/crc64.o 00:03:01.754 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:01.754 CC app/fio/nvme/fio_plugin.o 00:03:02.013 LINK event_perf 00:03:02.013 CC examples/nvme/hello_world/hello_world.o 00:03:02.013 CC examples/nvme/reconnect/reconnect.o 00:03:02.013 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.013 CXX test/cpp_headers/dif.o 00:03:02.013 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:02.013 LINK blobcli 00:03:02.013 LINK memory_ut 00:03:02.273 CXX test/cpp_headers/dma.o 00:03:02.273 CC test/event/reactor/reactor.o 00:03:02.273 LINK hello_world 00:03:02.273 LINK reactor 00:03:02.273 CXX test/cpp_headers/endian.o 00:03:02.273 LINK reconnect 00:03:02.273 CC examples/nvme/arbitration/arbitration.o 00:03:02.532 CC app/vhost/vhost.o 00:03:02.532 CC test/app/stub/stub.o 00:03:02.532 LINK vhost_fuzz 00:03:02.532 CXX test/cpp_headers/env_dpdk.o 00:03:02.532 CC test/event/reactor_perf/reactor_perf.o 00:03:02.532 LINK spdk_nvme 00:03:02.532 CC examples/nvme/hotplug/hotplug.o 00:03:02.532 LINK vhost 00:03:02.532 LINK nvme_manage 00:03:02.790 LINK stub 00:03:02.790 CXX test/cpp_headers/env.o 00:03:02.790 LINK reactor_perf 00:03:02.790 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:02.790 LINK arbitration 00:03:02.790 CC app/fio/bdev/fio_plugin.o 00:03:02.790 CXX test/cpp_headers/event.o 00:03:02.790 LINK hotplug 00:03:02.790 CC examples/nvme/abort/abort.o 00:03:02.790 LINK cmb_copy 00:03:02.790 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.048 CC test/event/app_repeat/app_repeat.o 00:03:03.048 CXX test/cpp_headers/fd_group.o 00:03:03.048 CC test/nvme/aer/aer.o 00:03:03.048 LINK pmr_persistence 00:03:03.048 LINK app_repeat 00:03:03.048 CC test/nvme/reset/reset.o 00:03:03.048 CC examples/bdev/hello_world/hello_bdev.o 00:03:03.307 LINK iscsi_fuzz 00:03:03.307 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.307 CXX test/cpp_headers/fd.o 00:03:03.307 CXX test/cpp_headers/file.o 00:03:03.307 LINK abort 00:03:03.307 LINK aer 00:03:03.307 LINK spdk_bdev 00:03:03.307 CC test/event/scheduler/scheduler.o 00:03:03.307 LINK hello_bdev 00:03:03.565 LINK reset 00:03:03.565 CXX test/cpp_headers/ftl.o 00:03:03.565 CC test/rpc_client/rpc_client_test.o 00:03:03.565 CXX test/cpp_headers/gpt_spec.o 00:03:03.565 LINK scheduler 00:03:03.823 CC test/accel/dif/dif.o 00:03:03.823 CXX test/cpp_headers/hexlify.o 00:03:03.823 CC test/nvme/sgl/sgl.o 00:03:03.823 CXX test/cpp_headers/histogram_data.o 00:03:03.823 LINK rpc_client_test 00:03:03.823 CC test/nvme/e2edp/nvme_dp.o 00:03:03.823 CC test/blobfs/mkfs/mkfs.o 00:03:03.823 CC test/lvol/esnap/esnap.o 00:03:03.823 CXX test/cpp_headers/idxd.o 00:03:04.081 CC test/nvme/overhead/overhead.o 00:03:04.081 CC test/nvme/err_injection/err_injection.o 00:03:04.081 LINK mkfs 00:03:04.081 CC test/nvme/startup/startup.o 00:03:04.081 LINK sgl 00:03:04.081 CXX test/cpp_headers/idxd_spec.o 00:03:04.081 LINK nvme_dp 00:03:04.081 LINK bdevperf 00:03:04.081 LINK err_injection 00:03:04.339 LINK dif 00:03:04.339 LINK startup 00:03:04.339 CXX test/cpp_headers/init.o 00:03:04.339 LINK overhead 00:03:04.339 CC test/nvme/reserve/reserve.o 00:03:04.339 CC test/nvme/simple_copy/simple_copy.o 00:03:04.339 CXX test/cpp_headers/ioat.o 00:03:04.339 CC test/nvme/connect_stress/connect_stress.o 00:03:04.339 CXX test/cpp_headers/ioat_spec.o 00:03:04.339 CXX test/cpp_headers/iscsi_spec.o 00:03:04.598 CXX test/cpp_headers/json.o 00:03:04.598 CC test/nvme/boot_partition/boot_partition.o 00:03:04.598 LINK reserve 00:03:04.598 LINK connect_stress 00:03:04.598 CC examples/nvmf/nvmf/nvmf.o 00:03:04.598 CC test/nvme/compliance/nvme_compliance.o 00:03:04.598 LINK simple_copy 00:03:04.598 CC test/nvme/fused_ordering/fused_ordering.o 00:03:04.598 CXX test/cpp_headers/jsonrpc.o 00:03:04.598 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:04.856 LINK boot_partition 00:03:04.856 CC test/nvme/fdp/fdp.o 00:03:04.856 CC test/nvme/cuse/cuse.o 00:03:04.856 CXX test/cpp_headers/keyring.o 00:03:04.856 CXX test/cpp_headers/keyring_module.o 00:03:04.856 LINK doorbell_aers 00:03:04.856 LINK fused_ordering 00:03:04.856 LINK nvmf 00:03:05.115 LINK nvme_compliance 00:03:05.115 CXX test/cpp_headers/likely.o 00:03:05.115 CC test/bdev/bdevio/bdevio.o 00:03:05.115 CXX test/cpp_headers/log.o 00:03:05.115 CXX test/cpp_headers/lvol.o 00:03:05.115 CXX test/cpp_headers/memory.o 00:03:05.115 CXX test/cpp_headers/mmio.o 00:03:05.115 CXX test/cpp_headers/nbd.o 00:03:05.115 CXX test/cpp_headers/notify.o 00:03:05.373 CXX test/cpp_headers/nvme.o 00:03:05.373 LINK fdp 00:03:05.373 CXX test/cpp_headers/nvme_intel.o 00:03:05.373 CXX test/cpp_headers/nvme_ocssd.o 00:03:05.373 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:05.373 CXX test/cpp_headers/nvme_spec.o 00:03:05.373 CXX test/cpp_headers/nvme_zns.o 00:03:05.373 CXX test/cpp_headers/nvmf_cmd.o 00:03:05.373 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:05.373 CXX test/cpp_headers/nvmf.o 00:03:05.632 CXX test/cpp_headers/nvmf_spec.o 00:03:05.632 CXX 
test/cpp_headers/nvmf_transport.o 00:03:05.632 CXX test/cpp_headers/opal.o 00:03:05.632 LINK bdevio 00:03:05.632 CXX test/cpp_headers/opal_spec.o 00:03:05.632 CXX test/cpp_headers/pci_ids.o 00:03:05.632 CXX test/cpp_headers/pipe.o 00:03:05.632 CXX test/cpp_headers/queue.o 00:03:05.632 CXX test/cpp_headers/reduce.o 00:03:05.632 CXX test/cpp_headers/rpc.o 00:03:05.632 CXX test/cpp_headers/scheduler.o 00:03:05.632 CXX test/cpp_headers/scsi.o 00:03:05.890 CXX test/cpp_headers/scsi_spec.o 00:03:05.890 CXX test/cpp_headers/sock.o 00:03:05.890 CXX test/cpp_headers/stdinc.o 00:03:05.890 CXX test/cpp_headers/string.o 00:03:05.890 CXX test/cpp_headers/thread.o 00:03:05.890 CXX test/cpp_headers/trace.o 00:03:05.890 CXX test/cpp_headers/trace_parser.o 00:03:05.890 CXX test/cpp_headers/tree.o 00:03:05.890 CXX test/cpp_headers/ublk.o 00:03:05.890 CXX test/cpp_headers/util.o 00:03:05.890 CXX test/cpp_headers/uuid.o 00:03:05.890 CXX test/cpp_headers/version.o 00:03:05.890 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.147 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.147 CXX test/cpp_headers/vhost.o 00:03:06.147 CXX test/cpp_headers/vmd.o 00:03:06.147 CXX test/cpp_headers/xor.o 00:03:06.147 CXX test/cpp_headers/zipf.o 00:03:06.406 LINK cuse 00:03:10.593 LINK esnap 00:03:11.160 00:03:11.160 real 1m17.232s 00:03:11.160 user 7m28.067s 00:03:11.160 sys 1m31.536s 00:03:11.160 15:11:24 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:11.160 15:11:24 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.160 ************************************ 00:03:11.160 END TEST make 00:03:11.160 ************************************ 00:03:11.160 15:11:24 -- common/autotest_common.sh@1142 -- $ return 0 00:03:11.160 15:11:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.160 15:11:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.160 15:11:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.160 15:11:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.160 15:11:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.160 15:11:24 -- pm/common@44 -- $ pid=5224 00:03:11.160 15:11:24 -- pm/common@50 -- $ kill -TERM 5224 00:03:11.160 15:11:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.160 15:11:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.160 15:11:24 -- pm/common@44 -- $ pid=5226 00:03:11.160 15:11:24 -- pm/common@50 -- $ kill -TERM 5226 00:03:11.160 15:11:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:11.160 15:11:24 -- nvmf/common.sh@7 -- # uname -s 00:03:11.160 15:11:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.160 15:11:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.160 15:11:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.160 15:11:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.160 15:11:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.160 15:11:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.160 15:11:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.160 15:11:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.160 15:11:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.160 15:11:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.160 15:11:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e52e2e0-5ec1-4d08-b2ca-1e4c6bc2e59a 00:03:11.160 15:11:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=2e52e2e0-5ec1-4d08-b2ca-1e4c6bc2e59a 00:03:11.160 15:11:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.160 15:11:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.160 15:11:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:11.160 15:11:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.160 15:11:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:11.160 15:11:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.160 15:11:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.160 15:11:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.160 15:11:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.160 15:11:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.160 15:11:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.160 15:11:24 -- paths/export.sh@5 -- # export PATH 00:03:11.160 15:11:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.160 15:11:24 -- nvmf/common.sh@47 -- # : 0 00:03:11.160 15:11:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:11.160 15:11:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:11.160 15:11:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.160 15:11:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.160 15:11:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.160 15:11:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:11.160 15:11:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:11.160 15:11:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:11.160 15:11:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.160 15:11:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.160 15:11:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.160 15:11:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.160 15:11:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.160 15:11:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.160 15:11:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:11.160 15:11:24 -- spdk/autotest.sh@44 -- # modprobe nbd 
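The autotest lines above save the host's existing core_pattern, create a coredumps directory, and point the kernel at a collector script via the pipe form of /proc/sys/kernel/core_pattern. A condensed sketch of that mechanism (requires root; the collector path is the one from this run, and the EXIT trap restoring the old handler is an assumption about cleanup, not a verbatim excerpt of autotest.sh):

    # Save the current handler and restore it when the script exits.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    # The leading '|' makes the kernel pipe each core dump into the script;
    # %P = PID, %s = signal number, %t = dump time (see core(5) specifiers).
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
        > /proc/sys/kernel/core_pattern
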
00:03:11.160 15:11:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.160 15:11:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.160 15:11:24 -- spdk/autotest.sh@48 -- # udevadm_pid=53756 00:03:11.160 15:11:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.160 15:11:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.160 15:11:24 -- pm/common@17 -- # local monitor 00:03:11.160 15:11:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.160 15:11:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.160 15:11:24 -- pm/common@25 -- # sleep 1 00:03:11.160 15:11:24 -- pm/common@21 -- # date +%s 00:03:11.160 15:11:24 -- pm/common@21 -- # date +%s 00:03:11.160 15:11:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720710684 00:03:11.160 15:11:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720710684 00:03:11.419 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720710684_collect-cpu-load.pm.log 00:03:11.419 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720710684_collect-vmstat.pm.log 00:03:12.354 15:11:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.354 15:11:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.354 15:11:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.354 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:03:12.354 15:11:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.354 15:11:25 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:12.354 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:03:12.354 15:11:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:12.354 15:11:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:12.354 15:11:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:12.354 15:11:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:12.354 15:11:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:12.354 15:11:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.354 15:11:25 -- common/autotest_common.sh@1455 -- # uname 00:03:12.354 15:11:25 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:12.354 15:11:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.354 15:11:25 -- common/autotest_common.sh@1475 -- # uname 00:03:12.354 15:11:25 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:12.354 15:11:25 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:12.354 15:11:25 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:12.354 15:11:25 -- spdk/autotest.sh@72 -- # hash lcov 00:03:12.354 15:11:25 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:12.354 15:11:25 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:12.354 --rc lcov_branch_coverage=1 00:03:12.354 --rc lcov_function_coverage=1 00:03:12.354 --rc genhtml_branch_coverage=1 00:03:12.354 --rc genhtml_function_coverage=1 00:03:12.354 --rc genhtml_legend=1 00:03:12.354 --rc geninfo_all_blocks=1 00:03:12.354 ' 00:03:12.354 15:11:25 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:12.354 --rc lcov_branch_coverage=1 00:03:12.354 --rc 
lcov_function_coverage=1 00:03:12.354 --rc genhtml_branch_coverage=1 00:03:12.354 --rc genhtml_function_coverage=1 00:03:12.354 --rc genhtml_legend=1 00:03:12.354 --rc geninfo_all_blocks=1 00:03:12.354 ' 00:03:12.354 15:11:25 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:12.354 --rc lcov_branch_coverage=1 00:03:12.354 --rc lcov_function_coverage=1 00:03:12.354 --rc genhtml_branch_coverage=1 00:03:12.354 --rc genhtml_function_coverage=1 00:03:12.354 --rc genhtml_legend=1 00:03:12.354 --rc geninfo_all_blocks=1 00:03:12.354 --no-external' 00:03:12.354 15:11:25 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:12.354 --rc lcov_branch_coverage=1 00:03:12.354 --rc lcov_function_coverage=1 00:03:12.354 --rc genhtml_branch_coverage=1 00:03:12.354 --rc genhtml_function_coverage=1 00:03:12.354 --rc genhtml_legend=1 00:03:12.354 --rc geninfo_all_blocks=1 00:03:12.354 --no-external' 00:03:12.354 15:11:25 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:12.354 lcov: LCOV version 1.14 00:03:12.354 15:11:25 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.227 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.227 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:39.446 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:39.446 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:39.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:39.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:39.447 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:39.447 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:39.447 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:42.734 15:11:55 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:42.734 15:11:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.734 15:11:55 -- common/autotest_common.sh@10 -- # set +x 00:03:42.734 15:11:55 -- spdk/autotest.sh@91 -- # rm -f 00:03:42.734 15:11:55 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:43.302 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:43.302 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:43.302 0000:00:12.0 (1b36 0010): Already using the nvme driver 
00:03:43.302 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:43.561 15:11:56 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:43.561 15:11:56 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:43.561 15:11:56 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:43.561 15:11:56 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:43.561 15:11:56 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:43.561 15:11:56 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:43.561 15:11:56 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:43.561 15:11:56 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:43.561 15:11:56 -- spdk/autotest.sh@110 -- # for dev 
in /dev/nvme*n!(*p*) 00:03:43.561 15:11:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.561 15:11:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:43.561 15:11:56 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:43.561 15:11:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:43.561 No valid GPT data, bailing 00:03:43.561 15:11:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.561 15:11:57 -- scripts/common.sh@391 -- # pt= 00:03:43.561 15:11:57 -- scripts/common.sh@392 -- # return 1 00:03:43.561 15:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:43.561 1+0 records in 00:03:43.561 1+0 records out 00:03:43.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121644 s, 86.2 MB/s 00:03:43.561 15:11:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.561 15:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.561 15:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:43.561 15:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:43.561 15:11:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:43.561 No valid GPT data, bailing 00:03:43.561 15:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:43.562 15:11:57 -- scripts/common.sh@391 -- # pt= 00:03:43.562 15:11:57 -- scripts/common.sh@392 -- # return 1 00:03:43.562 15:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:43.562 1+0 records in 00:03:43.562 1+0 records out 00:03:43.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504281 s, 208 MB/s 00:03:43.562 15:11:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.562 15:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.562 15:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:43.562 15:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:43.562 15:11:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:43.562 No valid GPT data, bailing 00:03:43.562 15:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:43.820 15:11:57 -- scripts/common.sh@391 -- # pt= 00:03:43.820 15:11:57 -- scripts/common.sh@392 -- # return 1 00:03:43.820 15:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:43.820 1+0 records in 00:03:43.820 1+0 records out 00:03:43.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474562 s, 221 MB/s 00:03:43.820 15:11:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.820 15:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.820 15:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:03:43.820 15:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:03:43.820 15:11:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:43.820 No valid GPT data, bailing 00:03:43.821 15:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:43.821 15:11:57 -- scripts/common.sh@391 -- # pt= 00:03:43.821 15:11:57 -- scripts/common.sh@392 -- # return 1 00:03:43.821 15:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:43.821 1+0 records in 00:03:43.821 1+0 records out 00:03:43.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431117 s, 243 MB/s 00:03:43.821 15:11:57 -- spdk/autotest.sh@110 -- 
# for dev in /dev/nvme*n!(*p*) 00:03:43.821 15:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.821 15:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:03:43.821 15:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:03:43.821 15:11:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:43.821 No valid GPT data, bailing 00:03:43.821 15:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:43.821 15:11:57 -- scripts/common.sh@391 -- # pt= 00:03:43.821 15:11:57 -- scripts/common.sh@392 -- # return 1 00:03:43.821 15:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:43.821 1+0 records in 00:03:43.821 1+0 records out 00:03:43.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045129 s, 232 MB/s 00:03:43.821 15:11:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.821 15:11:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.821 15:11:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:43.821 15:11:57 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:43.821 15:11:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:43.821 No valid GPT data, bailing 00:03:43.821 15:11:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:43.821 15:11:57 -- scripts/common.sh@391 -- # pt= 00:03:43.821 15:11:57 -- scripts/common.sh@392 -- # return 1 00:03:43.821 15:11:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:43.821 1+0 records in 00:03:43.821 1+0 records out 00:03:43.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421783 s, 249 MB/s 00:03:43.821 15:11:57 -- spdk/autotest.sh@118 -- # sync 00:03:44.080 15:11:57 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:44.080 15:11:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:44.080 15:11:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.985 15:11:59 -- spdk/autotest.sh@124 -- # uname -s 00:03:45.985 15:11:59 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:45.985 15:11:59 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:45.985 15:11:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.985 15:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.985 15:11:59 -- common/autotest_common.sh@10 -- # set +x 00:03:45.985 ************************************ 00:03:45.985 START TEST setup.sh 00:03:45.985 ************************************ 00:03:45.985 15:11:59 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:45.985 * Looking for test storage... 
00:03:45.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:45.985 15:11:59 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:45.985 15:11:59 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:45.985 15:11:59 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:45.985 15:11:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.985 15:11:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.985 15:11:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.985 ************************************ 00:03:45.985 START TEST acl 00:03:45.985 ************************************ 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:45.985 * Looking for test storage... 00:03:45.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:45.985 15:11:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.985 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:03:45.986 15:11:59 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:45.986 15:11:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:45.986 15:11:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:45.986 15:11:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:45.986 15:11:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:45.986 15:11:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:45.986 15:11:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:45.986 15:11:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.986 15:11:59 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.396 15:12:00 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.396 15:12:00 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.396 15:12:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.396 15:12:00 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.396 15:12:00 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.396 15:12:00 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:47.655 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:47.655 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.655 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.913 Hugepages 00:03:47.913 node hugesize free / total 00:03:47.913 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.913 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.913 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.913 00:03:47.913 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.172 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:48.430 15:12:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.430 15:12:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.430 15:12:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.430 15:12:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.430 ************************************ 00:03:48.430 START TEST denied 00:03:48.430 ************************************ 00:03:48.430 15:12:01 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:48.430 15:12:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:48.430 15:12:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:48.430 15:12:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.430 15:12:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:48.430 15:12:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.806 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:49.806 15:12:03 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.806 15:12:03 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.386 00:03:56.386 real 0m7.113s 00:03:56.386 user 0m0.874s 00:03:56.386 sys 0m1.272s 00:03:56.386 15:12:09 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.386 15:12:09 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:56.386 ************************************ 00:03:56.386 END TEST denied 00:03:56.386 ************************************ 00:03:56.386 15:12:09 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:56.386 15:12:09 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:56.386 15:12:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.386 15:12:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.386 15:12:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:56.386 ************************************ 00:03:56.386 START TEST allowed 00:03:56.386 ************************************ 00:03:56.386 15:12:09 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:56.386 15:12:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:56.386 15:12:09 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:56.386 15:12:09 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:56.386 15:12:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.386 15:12:09 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.645 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in 
"$@" 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.645 15:12:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.023 00:03:58.023 real 0m2.135s 00:03:58.023 user 0m0.948s 00:03:58.023 sys 0m1.170s 00:03:58.023 15:12:11 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.023 15:12:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:58.023 ************************************ 00:03:58.023 END TEST allowed 00:03:58.023 ************************************ 00:03:58.023 15:12:11 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:58.023 ************************************ 00:03:58.023 END TEST acl 00:03:58.023 ************************************ 00:03:58.023 00:03:58.023 real 0m11.852s 00:03:58.023 user 0m3.016s 00:03:58.023 sys 0m3.833s 00:03:58.023 15:12:11 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.023 15:12:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.023 15:12:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:58.023 15:12:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:58.023 15:12:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.023 15:12:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.023 15:12:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.023 ************************************ 00:03:58.023 START TEST hugepages 00:03:58.023 ************************************ 00:03:58.023 15:12:11 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:58.023 * Looking for test storage... 
00:03:58.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5812116 kB' 'MemAvailable: 7395788 kB' 'Buffers: 2436 kB' 'Cached: 1796968 kB' 'SwapCached: 0 kB' 'Active: 444504 kB' 'Inactive: 1456908 kB' 'Active(anon): 112520 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 103940 kB' 'Mapped: 48576 kB' 'Shmem: 10512 kB' 'KReclaimable: 63440 kB' 'Slab: 136216 kB' 'SReclaimable: 63440 kB' 'SUnreclaim: 72776 kB' 'KernelStack: 6428 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.023 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.024 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:58.025 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.026 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.026 15:12:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:58.026 15:12:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.026 15:12:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.026 15:12:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.026 ************************************ 00:03:58.026 START TEST default_setup 00:03:58.026 ************************************ 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.026 15:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.164 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.165 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.165 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.165 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.165 
15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917436 kB' 'MemAvailable: 9500892 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461892 kB' 'Inactive: 1456924 kB' 'Active(anon): 129908 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456924 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 121268 kB' 'Mapped: 49020 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135164 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72188 kB' 'KernelStack: 6432 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.165 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[trace condensed: the same read/compare/continue cycle repeats for each remaining /proc/meminfo key, Shmem through HardwareCorrupted, until the requested key matches]
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.166 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:59.167 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917436 kB' 'MemAvailable: 9500892 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461812 kB' 'Inactive: 1456924 kB' 'Active(anon): 129828 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456924 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120968 kB' 'Mapped: 48840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135168 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72192 kB' 'KernelStack: 6400 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[trace condensed: the HugePages_Surp scan walks the snapshot key by key, MemTotal through HugePages_Rsvd, each miss taking the continue branch]
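What the condensed passes above and below are doing: get_meminfo snapshots a meminfo source into a bash array, strips any per-node "Node N " prefix, then scans the "Key: value" pairs one read at a time until the requested key matches, echoing the value and returning. The backslash-escaped right-hand sides in the [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] entries are simply how set -x renders a quoted match operand, so the comparison is a literal string match, not a glob. Below is a minimal reconstruction of the helper, inferred from this trace rather than copied from SPDK's test/setup/common.sh; names and details beyond what the trace shows are assumptions.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo helper exercised by the trace above; an
    # approximation reconstructed from the xtrace, not the verbatim SPDK code.
    shopt -s extglob # required for the +([0-9]) pattern in the prefix strip

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer per-node counters when a node is given and the sysfs file
        # exists; with $node empty this probes .../node/node/meminfo exactly
        # as seen in the trace, fails, and keeps the global file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so both
        # sources parse identically.
        mem=("${mem[@]#Node +([0-9]) }")

        # Linear scan: split "Key: value [kB]" on ':' and space, stop on match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
            continue # each miss shows up as '-- # continue' in the xtrace
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

    anon=$(get_meminfo AnonHugePages) # mirrors 'hugepages.sh@97 -- # anon=0'

Note that the snapshots drift slightly between passes (Active, AnonPages, PageTables), which confirms each get_meminfo call takes a fresh read of the source before scanning in memory.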
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:59.168 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917436 kB' 'MemAvailable: 9500892 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461844 kB' 'Inactive: 1456924 kB' 'Active(anon): 129860 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456924 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120996 kB' 'Mapped: 48780 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135172 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72196 kB' 'KernelStack: 6368 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:03:59.169 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the HugePages_Rsvd scan walks the snapshot key by key, MemTotal through HugePages_Free, each miss taking the continue branch]
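The [[ -e /sys/devices/system/node/node/meminfo ]] probe that repeats in every pass is the per-node hook invoked with an empty node argument. When a node is supplied, the per-node sysfs file carries the same counters behind a "Node N " prefix, which is what the ${mem[@]#Node +([0-9]) } expansion strips. A brief illustration, assuming the get_meminfo sketch above; node 0 and its file layout are the standard Linux convention, but hypothetical for this particular host:

    # /proc/meminfo                    /sys/devices/system/node/node0/meminfo
    #   HugePages_Total:    1024         Node 0 HugePages_Total:  1024
    #   HugePages_Free:     1024         Node 0 HugePages_Free:   1024
    get_meminfo HugePages_Total    # system-wide -> 1024
    get_meminfo HugePages_Total 0  # node-local, if node0/meminfo exists

The trace picks up below at the HugePages_Rsvd match.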
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:59.171 nr_hugepages=1024
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:59.171 resv_hugepages=0
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.171 surplus_hugepages=0
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.171 anon_hugepages=0
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917436 kB' 'MemAvailable: 9500892 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461668 kB' 'Inactive: 1456924 kB' 'Active(anon): 129684 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456924 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120804 kB' 'Mapped: 48700 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135184 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72208 kB' 'KernelStack: 6388 kB' 'PageTables: 3896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:03:59.171 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the HugePages_Total scan walks the snapshot key by key from MemTotal onward]
setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
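The block above is the field-scan loop inside setup/common.sh's get_meminfo: the file named by mem_f is read one "key: value" line at a time with IFS=': ', and the loop issues continue until the key equals the requested field, at which point the value is echoed (1024 for HugePages_Total here). A minimal sketch of that helper, reconstructed from the xtrace rather than copied from the SPDK source, so treat names and details as approximate:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    # get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo or,
    # when NODE is given and present, from that node's own meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo rows carry a "Node <id> " prefix; strip it so the
        # same key comparison works for both files.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }

    # In the trace: get_meminfo HugePages_Total   -> 1024
    #               get_meminfo HugePages_Surp 0  -> 0 (node 0)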
00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:59.433 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917184 kB' 'MemUsed: 4324796 kB' 'SwapCached: 0 kB' 'Active: 461516 kB' 'Inactive: 1456932 kB' 'Active(anon): 129532 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 1799392 kB' 'Mapped: 48576 kB' 'AnonPages: 120680 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62976 kB' 'Slab: 135184 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:59.433-00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [repetitive xtrace elided: the read loop compares MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and issues continue for every non-match]
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.434 node0=1024 expecting 1024
00:03:59.434 ************************************
00:03:59.434 END TEST default_setup
00:03:59.434 ************************************
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:59.434 real 0m1.379s
00:03:59.434 user 0m0.636s
00:03:59.434 sys 0m0.712s
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:59.434 15:12:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:59.434 15:12:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
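The check that just passed is the per-node half of verify_nr_hugepages: get_nodes records what the kernel reports per NUMA node (nodes_sys[0]=1024), each node's expected count is bumped by reserved and surplus pages, and the two tallies are compared, yielding "node0=1024 expecting 1024". A rough sketch of that accounting, reusing the get_meminfo sketch above; the real hugepages.sh keeps extra sorted_t/sorted_s bookkeeping omitted here, so this is an approximation of the traced logic, not the SPDK source:

    shopt -s extglob
    declare -A nodes_sys nodes_test   # nodes_test[] is pre-filled with the
                                      # expected per-node count by the caller

    # Mirror of hugepages.sh@110-130 as traced above, simplified: fold
    # reserved and surplus pages into each node's expected total, then
    # compare against the kernel-reported count for that node.
    verify_per_node() {
        local node resv=${1:-0} surp
        for node in /sys/devices/system/node/node+([0-9]); do
            node=${node##*node}
            nodes_sys[$node]=$(get_meminfo HugePages_Total "$node")
            (( nodes_test[$node] += resv ))
            surp=$(get_meminfo HugePages_Surp "$node")
            (( nodes_test[$node] += surp ))
            echo "node$node=${nodes_test[$node]} expecting ${nodes_sys[$node]}"
            [[ ${nodes_test[$node]} == "${nodes_sys[$node]}" ]] || return 1
        done
    }

    # In the trace: nodes_test[0]=1024, resv=0, surp=0, so the comparison
    # [[ 1024 == 1024 ]] succeeds and default_setup passes.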
00:03:59.434 15:12:12 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:59.434 15:12:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:59.434 15:12:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:59.434 15:12:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:59.434 ************************************
00:03:59.434 START TEST per_node_1G_alloc
00:03:59.434 ************************************
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:59.434 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:59.435 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:59.435 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:03:59.435 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
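get_test_nr_hugepages above converts the requested size into a page count: the 1048576 kB (1 GiB) request divided by the default hugepage size of 2048 kB (visible as 'Hugepagesize: 2048 kB' in the meminfo dump below) gives nr_hugepages=512, which is pinned to node 0 and handed to setup.sh as NRHUGE=512 HUGENODE=0. A compact sketch of that arithmetic; default_hugepages is hard-coded here as an assumption, where the real script derives it from the running kernel:

    # Sketch of hugepages.sh@49-73 as traced above, simplified.
    get_test_nr_hugepages() {
        local size=$1; shift              # requested size in kB (1048576 = 1 GiB)
        local node_ids=("$@") node        # target NUMA nodes ('0' in this run)
        local default_hugepages=2048      # kB; assumed, normally Hugepagesize
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages   # expected pages on each node
        done
    }
    # get_test_nr_hugepages 1048576 0 leads to the NRHUGE=512 HUGENODE=0
    # environment passed to scripts/setup.sh just after this point.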
00:03:59.435 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:59.435 15:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:59.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:59.957 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:59.957 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:59.957 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:59.957 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.957 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8962020 kB' 'MemAvailable: 10545488 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461824 kB' 'Inactive: 1456936 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120920 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135152 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6404 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:03:59.957-00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repetitive xtrace elided: the read loop compares MemTotal through HardwareCorrupted against \A\n\o\n\H\u\g\e\P\a\g\e\s and issues continue for every non-match]
00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
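The gate traced at hugepages.sh@96-97 reads as [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the bracketed word is the active transparent-hugepage mode, so the pattern only matches (and the branch is skipped) when THP is globally "never". With THP at "madvise" here, the script samples AnonHugePages, which comes back 0. A small sketch of the same gate; the sysfs path is the standard THP control file, named here as an assumption since the xtrace shows only its contents:

    # Sample THP-backed anonymous memory unless THP is fully disabled, so it
    # can be accounted for separately from the explicitly reserved hugepages.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the trace above
    fi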
anon=0 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8962020 kB' 'MemAvailable: 10545488 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 462048 kB' 'Inactive: 1456936 kB' 'Active(anon): 130064 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121144 kB' 'Mapped: 48572 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135180 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72204 kB' 'KernelStack: 6432 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.959 15:12:13 setup.sh.hugepages.per_node_1G_alloc 
[xtrace compressed: setup/common.sh@31-32, the IFS=': ' read -r var val _ loop walks the snapshot; every key from MemTotal through HugePages_Rsvd is tested with [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and skipped via continue until HugePages_Surp matches]
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
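[note: the \H\u\g\e\P\a\g\e\s\_\S\u\r\p patterns in this trace are not corruption. Under set -x, bash backslash-escapes every character of the expanded quoted word on the right of == inside [[ ]], which shows that "$get" is matched literally rather than as a glob. A two-line reproduction in any recent bash:]

    set -x
    get=HugePages_Surp var=HugePages_Surp
    [[ $var == "$get" ]] && echo match
    # xtrace prints: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]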
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
[xtrace compressed: the remaining locals, the per-node meminfo existence test, and the mapfile/prefix-strip steps repeat exactly as in the previous call]
00:03:59.962 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8962540 kB' 'MemAvailable: 10546008 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461664 kB' 'Inactive: 1456936 kB' 'Active(anon): 129680 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120836 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135180 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72204 kB' 'KernelStack: 6400 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
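[note: local node= is empty in every call in this stretch, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and the helper falls back to the system-wide /proc/meminfo. A sketch of how a node argument would switch the source, assuming NUMA node 0 exists; the grep is illustrative and not taken from the script:]

    node=0                # assumption: NUMA node 0 is present
    mem_f=/proc/meminfo   # system-wide fallback, as in the trace above
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a "Node 0 " prefix; the traced step
    # mem=("${mem[@]#Node +([0-9]) }") strips it (extglob must be enabled).
    grep -E 'HugePages_(Total|Free)' "$mem_f"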
[xtrace compressed: the read loop again tests every key from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skips it with continue until HugePages_Rsvd matches]
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:59.965 nr_hugepages=512
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.965 resv_hugepages=0
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.965 surplus_hugepages=0
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.965 anon_hugepages=0
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
[xtrace compressed: the remaining locals, the per-node meminfo existence test, and the mapfile/prefix-strip steps repeat as before]
00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8962672 kB' 'MemAvailable: 10546140 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461704 kB' 'Inactive: 1456936 kB' 'Active(anon): 129720 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 120836 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135176 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72200 kB' 'KernelStack: 6400 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
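[note: the two (( ... )) checks above assert that the 512 pages requested for this per-node 1G allocation test are fully accounted for: the expected count equals allocated plus surplus plus reserved pages, with anon hugepages at zero. The same arithmetic, worked with the values echoed above and the 2048 kB Hugepagesize from the snapshot:]

    # Sketch only: the consistency check hugepages.sh performs at this point.
    nr_hugepages=512 surp=0 resv=0
    (( 512 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'
    # 512 pages * 2048 kB/page = 1048576 kB = 1 GiB, matching 'Hugetlb: 1048576 kB'.
    echo "$(( nr_hugepages * 2048 )) kB"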
continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 
15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.965 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 
15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
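The run of "continue" lines above is the get_meminfo helper in setup/common.sh scanning meminfo one field at a time: with IFS=': ', each "Field: value kB" line splits into var/val, every field other than the requested one (HugePages_Total here) falls through to continue, and the first match echoes its value and returns. Reconstructed from this trace, the loop looks roughly like the sketch below; this is a simplification inferred from the xtrace, not the verbatim setup/common.sh source, and the positional get/node handling is an assumption based on the "local get=..." / "local node=..." lines in the log.

shopt -s extglob   # needed for the +([0-9]) pattern used on per-node lines

get_meminfo() {    # usage inferred from the trace: get_meminfo <Field> [<node>]
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # Prefer the per-node meminfo file when it exists; with node unset this
    # probes /sys/devices/system/node/node/meminfo and falls back to
    # /proc/meminfo, exactly as the [[ -e ... ]] checks in the trace show
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching field
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total 0   # prints 512 on node0 at this point in the run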
00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.966 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.967 15:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.967 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8962672 kB' 'MemUsed: 3279308 kB' 'SwapCached: 0 kB' 'Active: 461680 kB' 'Inactive: 1456936 kB' 'Active(anon): 129696 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1799392 kB' 'Mapped: 48580 kB' 'AnonPages: 120840 kB' 'Shmem: 10472 kB' 'KernelStack: 6400 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62976 kB' 'Slab: 135176 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.226 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.227 node0=512 expecting 512 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:00.227 ************************************ 00:04:00.227 END TEST per_node_1G_alloc 00:04:00.227 ************************************ 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:00.227 00:04:00.227 real 0m0.712s 00:04:00.227 user 0m0.331s 00:04:00.227 sys 0m0.391s 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.227 15:12:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.227 15:12:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.227 15:12:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:00.227 15:12:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.228 15:12:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.228 15:12:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.228 ************************************ 00:04:00.228 START TEST even_2G_alloc 00:04:00.228 ************************************ 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.228 15:12:13 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.228 15:12:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.486 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.751 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.751 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.751 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.751 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.751 15:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916640 kB' 'MemAvailable: 9500108 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 462128 kB' 'Inactive: 1456936 kB' 'Active(anon): 130144 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121264 kB' 'Mapped: 48728 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135116 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72140 kB' 'KernelStack: 6376 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.751 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
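At this point the trace is inside the anon-hugepage probe of verify_nr_hugepages: earlier in this log the test read "always [madvise] never" from /sys/kernel/mm/transparent_hugepage/enabled, and since THP is not pinned to [never] it samples AnonHugePages before doing its hugepage math, with node left unset, so the failed "[[ -e /sys/devices/system/node/node/meminfo ]]" check above routed the read to the global /proc/meminfo. A hypothetical condensation of that probe, under those assumptions (the function name thp_anon is invented here, and the real setup/hugepages.sh logic may differ):

# Relies on the get_meminfo sketch shown earlier in this log.
thp_anon() {
    local anon=0
    # Only discount transparent hugepages when THP is not hard-disabled
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # node unset -> global /proc/meminfo
    fi
    echo "$anon"   # AnonHugePages is 0 kB in this run, so anon=0
}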
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.752 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.753 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916640 kB' 'MemAvailable: 9500108 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461408 kB' 'Inactive: 1456936 kB' 'Active(anon): 129424 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120524 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135152 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6384 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-@32 compare-and-continue trace for each non-matching key above elided ...]
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
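What the trace above is exercising: the get_meminfo helper reads one meminfo-style file ("Key: value kB" records), strips any sysfs "Node N " prefix, and prints the value of the first key matching its argument, which is why the trace emits one compare/continue pair per field. A minimal stand-alone sketch of that pattern (hypothetical function name, not the SPDK source verbatim):

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the lookup being traced above: print the value for one key
    # from /proc/meminfo, or from a node's sysfs meminfo when an index is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Mirrors common.sh@23-@25 in the trace: prefer the per-node file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }    # sysfs rows carry a "Node N " prefix (cf. @29)
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then  # the @32 compare in the trace
                echo "$val"                # the @33 echo
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Surp    # prints 0 on the machine traced here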
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.754 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.755 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916640 kB' 'MemAvailable: 9500108 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461396 kB' 'Inactive: 1456936 kB' 'Active(anon): 129412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120512 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135152 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6384 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-@32 compare-and-continue trace for each non-matching key above elided ...]
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.756 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
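The lookups around this point feed a simple accounting identity: for the even_2G_alloc request of 1024 pages, HugePages_Total reported by the kernel must equal the requested nr_hugepages plus any surplus and reserved pages, both expected to be zero here. A sketch of the check traced at hugepages.sh@107-@110 (values are the ones from this run; the variable names follow the trace but the standalone form is an assumption):

    # Consistency check on the global hugepage counters, as traced at
    # setup/hugepages.sh@107-@110. Values taken from this run's snapshots.
    nr_hugepages=1024    # requested by the even_2G_alloc test
    surp=0               # get_meminfo HugePages_Surp
    resv=0               # get_meminfo HugePages_Rsvd
    total=1024           # get_meminfo HugePages_Total
    (( total == nr_hugepages + surp + resv )) \
        || { echo "hugepage accounting mismatch" >&2; exit 1; }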
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.756 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.757 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916640 kB' 'MemAvailable: 9500108 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461636 kB' 'Inactive: 1456936 kB' 'Active(anon): 129652 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 120752 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135152 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72176 kB' 'KernelStack: 6368 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[... setup/common.sh@31-@32 compare-and-continue trace for each non-matching key above elided ...]
00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
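With the global counters verified, the trace below walks the NUMA nodes and repeats the same lookups against each node's sysfs meminfo. A sketch of the node-discovery step traced at hugepages.sh@27-@33 (the extglob pattern is how the traced for-loop enumerates node directories; the echo at the end is illustrative only):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    # Sketch of get_nodes as traced below: record the expected hugepage
    # count for every /sys NUMA node (a single node0 on this VM).
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024    # 1024 = nr_hugepages from this run
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1          # at least one node must exist
    echo "no_nodes=$no_nodes nodes=${!nodes_sys[*]}"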
read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.758 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 
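The long field-by-field walk condensed above is the test's get_meminfo helper: bash xtrace prints one line per read and per failed match while the helper scans a meminfo file for the requested key, which is why a single lookup produces dozens of log lines. A minimal sketch of that lookup, reconstructed from the traced commands alone (the real helper lives in SPDK's test/setup/common.sh and may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the traced lookup; reconstructed from the xtrace output only.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _ mem_f=/proc/meminfo
        local -a mem
        # When a node id is given, read the per-NUMA-node view from sysfs.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 1024 for HugePages_Total above
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total    # whole system, from /proc/meminfo
    get_meminfo HugePages_Surp 0   # NUMA node 0, from node0/meminfo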
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.759 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916892 kB' 'MemUsed: 4325088 kB' 'SwapCached: 0 kB' 'Active: 461444 kB' 'Inactive: 1456936 kB' 'Active(anon): 129460 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 1799392 kB' 'Mapped: 48840 kB' 'AnonPages: 120728 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62976 kB' 'Slab: 135148 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:00.760 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: node0 fields MemTotal through HugePages_Free each fail the HugePages_Surp match and continue]
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:00.761 node0=1024 expecting 1024
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:00.761 ************************************
00:04:00.761 END TEST even_2G_alloc
00:04:00.761 ************************************
00:04:00.761
00:04:00.761 real 0m0.674s
00:04:00.761 user 0m0.324s
00:04:00.761 sys 0m0.372s
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:00.761 15:12:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:00.761 15:12:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
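The even_2G_alloc epilogue above is the per-node half of verify_nr_hugepages: the expected count per node (nodes_test, seeded with the 1024 pages the test requested) is topped up with reserved and surplus pages and compared against what get_nodes read from sysfs (nodes_sys). A condensed sketch under those assumptions, reusing get_meminfo from the sketch above; nodes_test, nodes_sys and resv come from the traced scope, and the sorted_t/sorted_s bookkeeping is elided:

    # Condensed sketch of the traced per-node check, not a copy of hugepages.sh.
    verify_per_node() {
        local node surp
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))                # reserved pages, counted earlier
            surp=$(get_meminfo HugePages_Surp "$node")    # 0 for node0 in this run
            (( nodes_test[node] += surp ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            # The run above prints "node0=1024 expecting 1024", so this holds.
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }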
00:04:00.761 15:12:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:00.761 15:12:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:00.761 15:12:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:00.761 15:12:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.020 ************************************
00:04:01.020 START TEST odd_alloc
00:04:01.020 ************************************
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.020 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:01.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:01.280 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.280 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.280 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.280 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
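The point of odd_alloc is the page count itself: HUGEMEM=2049 asks for 2049 MiB backed by 2 MiB hugepages, which cannot split evenly, and the trace shows get_test_nr_hugepages turning size 2098176 (KiB) into nr_hugepages=1025. Illustrative arithmetic only; the round-up shown is an assumption that reproduces the traced value, not a copy of the script:

    size_kb=2098176   # HUGEMEM=2049 MiB expressed in KiB (2049 * 1024)
    page_kb=2048      # Hugepagesize from the meminfo dumps in this log
    # Ceiling division: 1024.5 pages round up to an odd 1025.
    echo $(( (size_kb + page_kb - 1) / page_kb ))   # -> 1025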
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.280 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7914644 kB' 'MemAvailable: 9498112 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 462064 kB' 'Inactive: 1456936 kB' 'Active(anon): 130080 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121176 kB' 'Mapped: 48748 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135180 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72204 kB' 'KernelStack: 6432 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:01.281 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: MemTotal through HardwareCorrupted each fail the AnonHugePages match and continue]
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
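verify_nr_hugepages only trusts the hugetlb counters after checking transparent hugepages: the traced guard matched 'always [madvise] never', meaning THP is not disabled, so AnonHugePages is sampled (0 kB in this run) to keep THP-backed anonymous memory out of the comparison. A sketch of just that guard, reusing get_meminfo from the sketch above; the sysfs path is the standard kernel location assumed from the traced pattern:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP may hand out huge pages behind the test's back; record the
        # current AnonHugePages figure so it can be accounted separately.
        anon=$(get_meminfo AnonHugePages)
    fi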
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.545 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.546 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.546 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.546 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.546 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7914896 kB' 'MemAvailable: 9498364 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461724 kB' 'Inactive: 1456936 kB' 'Active(anon): 129740 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120856 kB' 'Mapped: 48584 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135168 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72192 kB' 'KernelStack: 6400 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:01.546 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: MemTotal through ShmemPmdMapped each fail the HugePages_Surp match and continue; the excerpt ends mid-scan]
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.547 15:12:14 
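
# Annotation: the trace above is get_meminfo scanning "Key: value [kB]" lines and
# printing the value of the first key that matches the request. A minimal
# standalone sketch of that lookup pattern, for illustration only; it is a
# simplified reconstruction, not the exact setup/common.sh implementation.
get_meminfo_sketch() {
	local get=$1 var val _
	# IFS contains ':' and ' ', so "HugePages_Surp:   0" splits into
	# var=HugePages_Surp, val=0 (any trailing "kB" lands in _)
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done </proc/meminfo
	return 1
}
# Example: get_meminfo_sketch HugePages_Surp  ->  prints 0 on this VM
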
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.547 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7919368 kB' 'MemAvailable: 9502836 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461700 kB' 'Inactive: 1456936 kB' 'Active(anon): 129716 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120820 kB' 'Mapped: 48584 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135156 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72180 kB' 'KernelStack: 6384 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:01.547-00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] repeats for each /proc/meminfo key (MemTotal through HugePages_Free), continuing past every non-matching key
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.549 nr_hugepages=1025 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
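
# Annotation: the @107/@109 checks above verify hugepage accounting for this
# odd-sized (1025-page) allocation: the pool the kernel reports must equal the
# requested page count plus any surplus and reserved pages. A hedged sketch of
# that check, using the values the trace just read from /proc/meminfo:
nr_hugepages=1025   # requested page count (vm.nr_hugepages)
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=1025          # HugePages_Total
if ((total == nr_hugepages + surp + resv)); then
	echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
fi
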
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.549 15:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7919368 kB' 'MemAvailable: 9502836 kB' 'Buffers: 2436 kB' 'Cached: 1796956 kB' 'SwapCached: 0 kB' 'Active: 461448 kB' 'Inactive: 1456936 kB' 'Active(anon): 129464 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120596 kB' 'Mapped: 48584 kB' 'Shmem: 10472 kB' 'KReclaimable: 62976 kB' 'Slab: 135136 kB' 'SReclaimable: 62976 kB' 'SUnreclaim: 72160 kB' 'KernelStack: 6400 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:01.549-00:04:01.551 15:12:14-15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] repeats for each /proc/meminfo key (MemTotal through Unaccepted), continuing past every non-matching key
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
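
# Annotation: the get_nodes trace above enumerates NUMA nodes via sysfs and
# records how many hugepages each node holds (one node on this VM, 1025 pages).
# A simplified sketch of that enumeration; reading the per-node count from the
# hugepages-2048kB sysfs attribute is an assumption here (the 2 MiB size matches
# the Hugepagesize reported above), not necessarily how the script sources it.
shopt -s extglob # required for the +([0-9]) pattern seen in the trace
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# strip everything up to and including the trailing "node" to get the id
	nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "detected ${#nodes_sys[@]} node(s)" # prints 1 here, with nodes_sys[0]=1025
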
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917868 kB' 'MemUsed: 4324112 kB' 'SwapCached: 0 kB' 'Active: 461628 kB' 'Inactive: 1456936 kB' 'Active(anon): 129644 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1799392 kB' 'Mapped: 48584 kB' 'AnonPages: 120740 kB' 'Shmem: 10472 kB' 'KernelStack: 6384 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62992 kB' 'Slab: 135148 kB' 'SReclaimable: 62992 kB' 'SUnreclaim: 72156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # read/compare loop begins: MemTotal, MemFree, and MemUsed each fail the [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] match and continue
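
# Annotation: the get_meminfo HugePages_Surp 0 call above passes a node id, so
# the function switches from /proc/meminfo to the per-node sysfs meminfo. Each
# line there carries a "Node 0 " prefix, which the trace strips with an extglob
# substitution before running the same key/value scan. A hedged sketch of that
# selection step, assuming node 0 as in the trace:
shopt -s extglob
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
	mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem <"$mem_f"
# "Node 0 HugePages_Surp:  0" -> "HugePages_Surp:  0"
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep HugePages_Surp
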
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.551 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:01.552 node0=1025 expecting 1025 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:01.552 
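The wall of @31/@32 records above is a single helper at work: get_meminfo in setup/common.sh snapshots the meminfo source (mapfile plus an extglob strip of the per-node "Node N " prefix, as the @28-29 records show), then scans field by field until the requested key matches. A minimal sketch of the same lookup, simplified to a plain read loop rather than the exact SPDK helper:

    # get_meminfo KEY [NODE] -> value column for KEY (sketch of the traced pattern)
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs when a node is requested.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}        # node files prefix every row with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    # Usage, with the values from the node0 snapshot above:
    #   get_meminfo HugePages_Total 0   -> 1025
    #   get_meminfo HugePages_Surp 0    -> 0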
00:04:01.552 real 0m0.687s
00:04:01.552 user 0m0.313s
00:04:01.552 sys 0m0.388s
00:04:01.552 ************************************
00:04:01.552 END TEST odd_alloc
00:04:01.552 ************************************
00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:01.552 15:12:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:01.552 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:01.552 15:12:15 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:01.552 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:01.552 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:01.552 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.552 ************************************
00:04:01.552 START TEST custom_alloc
00:04:01.552 ************************************
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
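The conversion traced above (get_test_nr_hugepages 1048576 -> nr_hugepages=512) is plain division by the default hugepage size: 1048576 kB requested / 2048 kB per page = 512 pages. A sketch of that arithmetic, assuming the kB units shown in the meminfo snapshots (variable names here are illustrative, not the script's):

    # size -> page count, as traced at setup/hugepages.sh@49-57 (sketch, kB units)
    size_kb=1048576
    default_hugepages_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
    (( size_kb >= default_hugepages_kb )) || { echo "size below one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size_kb / default_hugepages_kb ))
    echo "$nr_hugepages"  # 1048576 / 2048 = 512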
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:01.552 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:01.553 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:01.553 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.553 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:02.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.124 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.124 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.124 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.124 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
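HUGENODE='nodes_hp[0]=512' is handed to scripts/setup.sh, which performs the actual reservation (plus the PCI-binding decisions logged above). What setup.sh does internally is more involved; as a hedged sketch of just the kernel interface a per-node reservation ultimately goes through:

    # Per-node hugepage reservation via sysfs (illustrative only; needs root)
    node=0 pages=512 hp_kb=2048
    echo "$pages" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
    grep HugePages_Total "/sys/devices/system/node/node${node}/meminfo"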
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.124 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8963368 kB' 'MemAvailable: 10546848 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 461996 kB' 'Inactive: 1456940 kB' 'Active(anon): 130012 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121112 kB' 'Mapped: 48812 kB' 'Shmem: 10472 kB' 'KReclaimable: 62992 kB' 'Slab: 135140 kB' 'SReclaimable: 62992 kB' 'SUnreclaim: 72148 kB' 'KernelStack: 6440 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[xtrace of the field-by-field scan condensed: setup/common.sh@31-32 reads and skips every field of the snapshot above, MemTotal through HardwareCorrupted, with continue before AnonHugePages matches]
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.125 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.126 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.126 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.126 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.126 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.126 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8963368 kB' 'MemAvailable: 10546848 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 461716 kB' 'Inactive: 1456940 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120884 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62992 kB' 'Slab: 135160 kB' 'SReclaimable: 62992 kB' 'SUnreclaim: 72168 kB' 'KernelStack: 6400 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
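The @28-29 records above show the snapshot idiom itself: mapfile the whole meminfo source into an array, then strip the per-node "Node N " prefix from every row so the per-node and global formats parse identically (a no-op here, since this call reads /proc/meminfo). A standalone sketch of that idiom, assuming a node0 source; the +([0-9]) pattern needs extglob:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " from every row
    printf '%s\n' "${mem[@]}"          # rows now look like plain /proc/meminfo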
[xtrace of the field-by-field scan condensed: setup/common.sh@31-32 reads and skips every field of the snapshot above, MemTotal through HugePages_Free, with continue while looking for HugePages_Surp; the excerpt cuts off mid-scan]
00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8963368 kB' 'MemAvailable: 10546848 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 461912 kB' 'Inactive: 1456940 kB' 'Active(anon): 129928 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121024 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62992 kB' 'Slab: 135156 kB' 'SReclaimable: 62992 kB' 'SUnreclaim: 72164 kB' 'KernelStack: 6384 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.127 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.128 
15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.128 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.129 nr_hugepages=512 00:04:02.129 resv_hugepages=0 00:04:02.129 surplus_hugepages=0 00:04:02.129 anon_hugepages=0 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:02.129 15:12:15 
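Readable summary of the trace above: get_meminfo returned 0 for both HugePages_Surp and HugePages_Rsvd, so the 512 configured pages satisfy hugepages.sh's (( 512 == nr_hugepages + surp + resv )) check. The lookup pattern is easier to see outside the xtrace; below is a minimal standalone sketch of it, assuming bash with extglob, where the helper name meminfo_val is invented for illustration and is not a function in setup/common.sh:

    shopt -s extglob   # needed for the +([0-9]) pattern below
    # meminfo_val FIELD [NODE] -- print FIELD's value, mirroring the traced
    # get_meminfo flow: pick the file, strip sysfs "Node N " prefixes, scan.
    meminfo_val() {
        local get=$1 node=${2:-} var val _ mem_f line
        local -a mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Skip every field until the requested one, as the trace does.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Against the /proc/meminfo dump printed above, meminfo_val HugePages_Rsvd would print 0 and meminfo_val HugePages_Total would print 512.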
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8963368 kB' 'MemAvailable: 10546848 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 461760 kB' 'Inactive: 1456940 kB' 'Active(anon): 129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120912 kB' 'Mapped: 48580 kB' 'Shmem: 10472 kB' 'KReclaimable: 62992 kB' 'Slab: 135156 kB' 'SReclaimable: 62992 kB' 'SUnreclaim: 72164 kB' 'KernelStack: 6400 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:02.129 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[… xtrace condensed: every field from MemTotal through Unaccepted is compared to HugePages_Total and skipped via continue …]
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
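The get_nodes call traced above discovers NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and records an expected page count per node; on this single-node VM that yields nodes_sys[0]=512 and no_nodes=1. A hedged sketch of that walk (the 512-per-node expectation is taken from this trace, not a general rule):

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # pages this test expects on the node
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2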
val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8964148 kB' 'MemUsed: 3277832 kB' 'SwapCached: 0 kB' 'Active: 461824 kB' 'Inactive: 1456940 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1799396 kB' 'Mapped: 48580 kB' 'AnonPages: 121080 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62992 kB' 'Slab: 135148 kB' 'SReclaimable: 62992 kB' 'SUnreclaim: 72156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.392 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
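The repetitive xtrace above is SPDK's get_meminfo() helper in test/setup/common.sh scanning /proc/meminfo one field at a time until it reaches the requested counter (HugePages_Surp here); every non-matching field shows up in the log as one [[ ... ]] test plus a continue. A minimal sketch of that pattern, simplified from the trace (the real helper buffers the file with mapfile -t mem and strips any leading "Node <n> " prefix for per-node queries, as the traced commands show):

get_meminfo() {
    # simplified stand-in for the traced helper, not the exact common.sh code
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo
    # with a node id, per-node counters come from sysfs instead
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        # each non-matching field yields one [[ ... ]] / continue pair above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

On this system get_meminfo HugePages_Surp prints 0, which is the echo 0 / return 0 visible just below once the scan finally hits the HugePages_Surp field.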
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.393 node0=512 expecting 512
************************************
END TEST custom_alloc
************************************
15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:02.393
00:04:02.393 real 0m0.679s
00:04:02.393 user 0m0.308s
00:04:02.393 sys 0m0.384s
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:02.393 15:12:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:02.393 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:02.393 15:12:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:02.393 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:02.393 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:02.393 15:12:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:02.393 ************************************
START TEST no_shrink_alloc
************************************
15:12:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.393 15:12:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:02.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.917 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.917 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.917 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.917 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:02.917 15:12:16
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916992 kB' 'MemAvailable: 9500468 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459428 kB' 'Inactive: 1456940 kB' 'Active(anon): 127444 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 118556 kB' 'Mapped: 48116 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135064 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72080 kB' 'KernelStack: 6424 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
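The get_test_nr_hugepages 2097152 0 call traced above converts a size in kB into a hugepage count and assigns it to the requested NUMA nodes: 2097152 kB / 2048 kB per page = 1024 pages, all placed on node 0 (nodes_test[0]=1024). A hedged sketch of that arithmetic; the sysfs path is the standard kernel interface, though the harness itself delegates the actual reservation to scripts/setup.sh:

# sketch only: variable names mirror the trace; the write requires root
size_kb=2097152                                            # requested pool size (2 GiB)
hpg_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
nr_hugepages=$(( size_kb / hpg_kb ))                       # 2097152 / 2048 = 1024
for node in 0; do                                          # node_ids=('0') in the trace
    echo "$nr_hugepages" \
        > "/sys/devices/system/node/node$node/hugepages/hugepages-${hpg_kb}kB/nr_hugepages"
done

The HugePages_Total: 1024 / HugePages_Free: 1024 fields in the meminfo snapshots below are this pool showing up once the reservation has taken effect.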
00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.917 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
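From verify_nr_hugepages onward, this stretch of the trace is three get_meminfo passes: AnonHugePages (taken only because the THP state "always [madvise] never" is not [never], per the check above), then HugePages_Surp and HugePages_Rsvd further down. A hedged reconstruction of that bookkeeping, reusing the get_meminfo sketch given earlier; the final check is an assumption inferred from the anon=0 / surp=0 results in this run, not the harness's literal assertion:

verify_nr_hugepages() {
    local anon=0 surp resv
    # AnonHugePages only counts when transparent hugepages are not disabled
    [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] &&
        anon=$(get_meminfo AnonHugePages)
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # queried a few entries below
    # assumed check: a freshly pinned pool should carry no surplus pages
    (( surp == 0 ))
}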
00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.918 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916992 kB' 'MemAvailable: 9500468 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459180 kB' 'Inactive: 1456940 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118340 kB' 'Mapped: 47832 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135064 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72080 kB' 'KernelStack: 6304 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 
15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.919 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc 
[xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for every remaining /proc/meminfo key until HugePages_Surp matches]
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.920 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916992 kB' 'MemAvailable: 9500468 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459128 kB' 'Inactive: 1456940 kB' 'Active(anon): 127144 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118292 kB' 'Mapped: 47840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135060 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72076 kB' 'KernelStack: 6336 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: per-key scan of the snapshot above until HugePages_Rsvd matches]
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:02.923 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:02.923 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7916992 kB' 'MemAvailable: 9500468 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459040 kB' 'Inactive: 1456940 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118200 kB' 'Mapped: 47840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135060 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72076 kB' 'KernelStack: 6320 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: per-key scan of the snapshot above until HugePages_Total matches]
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
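Every get_meminfo call traced in this section has the same shape: pick /proc/meminfo or a node's sysfs copy, strip any "Node <id> " prefix, then scan key/value pairs until the requested key matches. A minimal sketch of that helper, reassembled from the xtrace lines above -- illustrative only, not the verbatim SPDK test/setup/common.sh source:

#!/usr/bin/env bash
shopt -s extglob # needed for the +([0-9]) pattern below

# get_meminfo <key> [node] -- prints the value for <key>; with a node
# argument it reads that node's sysfs meminfo instead of /proc/meminfo.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val _ line
	local mem_f=/proc/meminfo
	local -a mem
	# With no node argument this tests the non-existent path
	# /sys/devices/system/node/node/meminfo, exactly as the trace shows.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem <"$mem_f"
	# Per-node meminfo prefixes every line with "Node <id> "; strip it
	# so both file layouts parse identically.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		# Non-matching keys fall through -- the long runs of
		# "continue" in the log are this scan stepping past them.
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

# In this run: get_meminfo HugePages_Total -> 1024, and
# get_meminfo HugePages_Surp 0 -> 0 (node0), matching the values the
# surrounding hugepages.sh arithmetic checks against.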
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917400 kB' 'MemUsed: 4324580 kB' 'SwapCached: 0 kB' 'Active: 459024 kB' 'Inactive: 1456940 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1799396 kB' 'Mapped: 47840 kB' 'AnonPages: 118176 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62984 kB' 'Slab: 135060 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:02.925 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: per-key scan of the node0 snapshot above until HugePages_Surp matches]
00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
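For the per-node read above, hugepages.sh points get_meminfo at /sys/devices/system/node/node0/meminfo. The kernel also exposes the same hugepage counters as individual sysfs files, which gives a quick way to cross-check a snapshot like this one -- an aside using standard kernel sysfs paths, not part of the test itself (2048kB matches the Hugepagesize the snapshots report):

# Hypothetical cross-check of the node0 counters, not run by hugepages.sh:
node=0
base=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
cat "$base/nr_hugepages"      # node total   -> 1024 in this run
cat "$base/free_hugepages"    # node free    -> 1024
cat "$base/surplus_hugepages" # node surplus -> 0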
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.926 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.186 node0=1024 expecting 1024 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.186 15:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.446 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.446 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.446 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.446 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:03.446 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 
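The field scans condensed above all run through the same helper: get_meminfo in setup/common.sh reads /proc/meminfo (or a per-node sysfs meminfo, which is why the node0 scan above has no HugePages_Rsvd field) into an array, strips any "Node <n> " prefix, then splits each line on ': ' and skips until it reaches the requested field. A minimal stand-alone sketch of that pattern, reconstructed from the trace; the body below is illustrative, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    shopt -s extglob   # the +([0-9]) prefix-strip below is an extglob pattern
    # Reconstruction of the get_meminfo pattern visible in the xtrace output.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        # With a node argument the per-node counters come from sysfs, where
        # every line carries a "Node <n> " prefix that must be stripped.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Each non-matching field is one "continue" logged at @32 above.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # printed 0 on the run captured here

Splitting with IFS=': ' handles both "MemTotal:  12241980 kB" and the unit-less hugepage counters ("HugePages_Total:  1024"), which is why one code path serves every field.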
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.446 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7917880 kB' 'MemAvailable: 9501356 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459972 kB' 'Inactive: 1456940 kB' 'Active(anon): 127988 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119120 kB' 'Mapped: 47824 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135052 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72068 kB' 'KernelStack: 6408 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[trace elided: get_meminfo now walks /proc/meminfo field by field for AnonHugePages; every field from MemTotal through HardwareCorrupted logs setup/common.sh@32 "continue" (elapsed time advances from 00:04:03.446 to 00:04:03.711)]
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
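anon=0 above is the AnonHugePages value just fetched. It is only consulted because the guard at hugepages.sh@96 saw "always [madvise] never" in /sys/kernel/mm/transparent_hugepage/enabled, i.e. transparent hugepages are not switched to [never] on this VM. A hedged sketch of that step (thp_state is an illustrative name; get_meminfo as sketched earlier):

    # THP is considered active unless the bracketed policy is "never"; only
    # then can AnonHugePages hold THP-backed memory the test must account for.
    thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB; 0 kB on this run
    fi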
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.711 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7918220 kB' 'MemAvailable: 9501696 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459172 kB' 'Inactive: 1456940 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118292 kB' 'Mapped: 47840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135052 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72068 kB' 'KernelStack: 6336 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
[trace elided: the same field-by-field walk repeats for HugePages_Surp; every field from MemTotal through HugePages_Rsvd logs setup/common.sh@32 "continue" (00:04:03.711 to 00:04:03.713)]
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
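surp=0 records HugePages_Surp, the pages allocated beyond the configured nr_hugepages under overcommit, and the trace fetches HugePages_Rsvd next. Per kernel semantics HugePages_Total counts surplus pages too, so the persistent pool is Total minus Surp. A short sketch of the bookkeeping these lookups support (the final echo is an assumption about intent; the counter values come straight from the snapshots above):

    surp=$(get_meminfo HugePages_Surp)    # 0: nothing allocated past nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)    # 0: no pages promised but not yet faulted
    total=$(get_meminfo HugePages_Total)  # 1024 per the meminfo snapshots
    # Assumed intent: report the statically configured pool size.
    echo "persistent pool: $((total - surp)) pages"   # 1024 on this run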
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7918220 kB' 'MemAvailable: 9501696 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459320 kB' 'Inactive: 1456940 kB' 'Active(anon): 127336 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118432 kB' 'Mapped: 47840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135052 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72068 kB' 'KernelStack: 6320 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB'
00:04:03.713 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace elided: the walk repeats once more for HugePages_Rsvd; MemTotal onward each log setup/common.sh@32 "continue", and this excerpt ends mid-scan at NFS_Unstable (00:04:03.713 to 00:04:03.714)]
-- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.714 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.714 nr_hugepages=1024 00:04:03.714 resv_hugepages=0 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.715 surplus_hugepages=0 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.715 anon_hugepages=0 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
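The lookups traced above and below all follow one pattern: snapshot a meminfo file, strip any per-node prefix, then scan field by field until the requested key matches and echo its value. A minimal sketch of that pattern follows, assuming bash with extglob; the name get_meminfo_sketch and the exact shape are illustrative, not the verbatim setup/common.sh source.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo_sketch KEY [NODE]: print KEY's value from /proc/meminfo,
    # or from the node's own meminfo when NODE is given and present.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ mem_f mem
        mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it so
        # both files parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in the run above
    surp0=$(get_meminfo_sketch HugePages_Surp 0)  # per-node variant

Scanning every field this way is what produces the long compare/continue runs in the trace: the loop only short-circuits once the wanted key is reached, so keys near the end of meminfo (the HugePages_* block) cost a full pass over the snapshot.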
read -r var val _ 00:04:03.715 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7918220 kB' 'MemAvailable: 9501696 kB' 'Buffers: 2436 kB' 'Cached: 1796960 kB' 'SwapCached: 0 kB' 'Active: 459180 kB' 'Inactive: 1456940 kB' 'Active(anon): 127196 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118304 kB' 'Mapped: 47840 kB' 'Shmem: 10472 kB' 'KReclaimable: 62984 kB' 'Slab: 135052 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72068 kB' 'KernelStack: 6336 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 6113280 kB' 'DirectMap1G: 8388608 kB' 00:04:03.715
[setup/common.sh@31-32 xtrace elided: the "read -r var val _" / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" triplet repeats once per snapshot field, MemTotal through CmaFree, with no match]
00:04:03.716 15:12:17
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.716 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7918220 kB' 'MemUsed: 4323760 kB' 'SwapCached: 0 kB' 'Active: 459136 kB' 'Inactive: 1456940 kB' 'Active(anon): 127152 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1456940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1799396 kB' 'Mapped: 47840 kB' 'AnonPages: 118296 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62984 kB' 'Slab: 135052 kB' 'SReclaimable: 62984 kB' 'SUnreclaim: 72068 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.717
[setup/common.sh@31-32 xtrace elided: the "read -r var val _" / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" triplet repeats once per node0 field, MemTotal through FilePmdMapped, with no match]
15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.718 node0=1024 expecting 1024 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.718 ************************************ 00:04:03.718 END TEST no_shrink_alloc 00:04:03.718 ************************************ 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.718 00:04:03.718 real 0m1.418s 00:04:03.718 user 0m0.678s 00:04:03.718 sys 0m0.772s 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.718 15:12:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:03.718 15:12:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.718 15:12:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.718 ************************************ 00:04:03.718 END TEST hugepages 00:04:03.718 ************************************ 00:04:03.718 00:04:03.718 real 0m6.001s 00:04:03.718 user 0m2.738s 00:04:03.718 sys 0m3.275s 00:04:03.718 15:12:17 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.718 15:12:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:03.976 15:12:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:03.976 15:12:17 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:03.977 15:12:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.977 15:12:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.977 15:12:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:03.977 ************************************ 00:04:03.977 START TEST driver 00:04:03.977 ************************************ 00:04:03.977 15:12:17 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:03.977 * Looking for test storage... 00:04:03.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:03.977 15:12:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:03.977 15:12:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.977 15:12:17 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.553 15:12:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:10.553 15:12:23 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.553 15:12:23 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.553 15:12:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.553 ************************************ 00:04:10.553 START TEST guess_driver 00:04:10.553 ************************************ 00:04:10.553 15:12:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:10.553 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:10.553 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:10.553 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:10.554 
15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:10.554 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:10.554 Looking for driver=uio_pci_generic 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:10.554 15:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:11.132 15:12:24 setup.sh.driver.guess_driver 
-- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.132 15:12:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.697 00:04:17.697 real 0m7.167s 00:04:17.697 user 0m0.800s 00:04:17.697 sys 0m1.433s 00:04:17.697 15:12:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.697 ************************************ 00:04:17.697 END TEST guess_driver 00:04:17.697 ************************************ 00:04:17.697 15:12:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.697 15:12:30 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:17.697 ************************************ 00:04:17.697 END TEST driver 00:04:17.697 ************************************ 00:04:17.697 00:04:17.697 real 0m13.208s 00:04:17.697 user 0m1.155s 00:04:17.697 sys 0m2.226s 00:04:17.697 15:12:30 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.697 15:12:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:17.697 15:12:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:17.697 15:12:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:17.697 15:12:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.697 15:12:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.697 15:12:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:17.697 ************************************ 00:04:17.697 START TEST devices 00:04:17.697 ************************************ 00:04:17.697 15:12:30 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:17.697 * Looking for test storage... 
00:04:17.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:17.697 15:12:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:17.697 15:12:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:17.697 15:12:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.697 15:12:30 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:18.265 15:12:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:18.265 15:12:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:18.266 15:12:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:18.266 15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:18.266 15:12:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:18.266 15:12:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:18.266 No valid GPT data, bailing 00:04:18.266 15:12:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:18.525 15:12:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:18.525 15:12:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:18.525 15:12:31 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:18.525 
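
Each block device the loop above visits passes the same gates: controller-scoped names such as nvme3c3n1 are skipped, zoned namespaces are excluded, a device only counts as free if it has no partition table, and it must clear min_disk_size. A rough sketch of those gates, assuming the sysfs paths from the trace; the spdk-gpt.py step is elided here and approximated with the blkid call alone, and the size read is a plain sector count rather than the script's own helper:

    # Sketch of the per-device gate traced above (paths from the trace).
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in devices.sh@198
    usable_blocks=()
    for dev in /sys/block/nvme*; do
        name=${dev##*/}
        [[ $name == *c* ]] && continue          # skip nvme3c3n1-style ctrl paths
        # 1. Skip zoned namespaces (queue/zoned != "none")
        [[ -e $dev/queue/zoned && $(<"$dev"/queue/zoned) != none ]] && continue
        # 2. Skip devices that already carry a partition table
        [[ -n $(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null) ]] && continue
        # 3. Keep it only if it is large enough
        size=$(( $(<"$dev"/size) * 512 ))       # size file is in 512-byte sectors
        (( size >= min_disk_size )) && usable_blocks+=("$name")
    done

In this run the 1 GiB nvme3n1 fails gate 3, which is why only five devices are counted and nvme0n1 becomes the test disk.
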
15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:18.525 No valid GPT data, bailing 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:18.525 15:12:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:18.525 15:12:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:18.525 15:12:31 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:18.525 15:12:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:18.525 15:12:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:18.525 No valid GPT data, bailing 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:18.525 15:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:18.525 15:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:18.525 15:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:18.525 15:12:32 
setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:18.525 No valid GPT data, bailing 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.525 15:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:18.525 15:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:18.525 15:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:18.525 15:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:18.525 15:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:18.785 No valid GPT data, bailing 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:18.785 15:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:18.785 15:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:18.785 15:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:18.785 15:12:32 setup.sh.devices 
-- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:18.785 No valid GPT data, bailing 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:18.785 15:12:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:18.785 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:18.785 15:12:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:18.785 15:12:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:18.785 15:12:32 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:04:18.786 15:12:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:18.786 15:12:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:18.786 15:12:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:18.786 15:12:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:18.786 15:12:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.786 15:12:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.786 15:12:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:18.786 ************************************ 00:04:18.786 START TEST nvme_mount 00:04:18.786 ************************************ 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:18.786 15:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 
-- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:19.724 Creating new GPT entries in memory. 00:04:19.724 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:19.724 other utilities. 00:04:19.724 15:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:19.724 15:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.724 15:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:19.724 15:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:19.724 15:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:21.100 Creating new GPT entries in memory. 00:04:21.100 The operation has completed successfully. 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59442 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- 
# [[ output == output ]] 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.100 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.359 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.359 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.359 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.359 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.359 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.359 15:12:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.618 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:21.618 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.877 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.877 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:21.877 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.877 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.877 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.878 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.878 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.137 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:22.137 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 
00:04:22.137 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.137 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.137 15:12:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.396 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.396 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:22.396 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:22.396 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.396 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.396 15:12:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.655 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.655 
15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.655 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.655 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.655 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.655 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.914 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:22.914 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.173 15:12:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:23.432 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.432 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.432 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.432 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.432 15:12:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.432 15:12:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.690 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.690 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.690 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.690 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.690 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.690 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.949 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:23.949 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:24.209 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:24.209 ************************************ 00:04:24.209 END TEST nvme_mount 00:04:24.209 ************************************ 00:04:24.209 00:04:24.209 real 0m5.373s 00:04:24.209 user 0m1.480s 00:04:24.209 sys 0m1.571s 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.209 15:12:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:24.209 15:12:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:24.209 15:12:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:24.209 15:12:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.209 15:12:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.209 15:12:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:24.209 ************************************ 00:04:24.209 START TEST dm_mount 00:04:24.209 ************************************ 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:24.209 
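
Stripped of the xtrace plumbing, the partition_drive call that the trace below steps through is a GPT zap plus one sgdisk --new per partition, with SPDK's sync_dev_uevents.sh watching in the background so the test does not race the kernel's partition rescan. A condensed sketch using the exact sgdisk arguments from this run (two 262144-sector, i.e. 128 MiB, partitions):

    # Condensed from the partition_drive trace below.
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                    # destroy old GPT/MBR structures
    # Watch for the expected partition uevents so we can wait on them
    /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh \
        block/partition nvme0n1p1 nvme0n1p2 &
    watcher=$!
    flock "$disk" sgdisk "$disk" --new=1:2048:264191      # partition 1
    flock "$disk" sgdisk "$disk" --new=2:264192:526335    # partition 2
    wait "$watcher"                             # block until udev saw both

The wait on the watcher PID is what shows up as "wait 60068" a little further down.
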
15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.209 15:12:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:25.145 Creating new GPT entries in memory. 00:04:25.145 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.145 other utilities. 00:04:25.145 15:12:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.145 15:12:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.145 15:12:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.145 15:12:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.145 15:12:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:26.526 Creating new GPT entries in memory. 00:04:26.526 The operation has completed successfully. 00:04:26.526 15:12:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.526 15:12:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.526 15:12:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.526 15:12:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.526 15:12:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:27.483 The operation has completed successfully. 
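
With both partitions in place, the test joins them into one device-mapper node and mounts that instead of a raw partition. The trace that follows only shows "dmsetup create nvme_dm_test"; the table fed to it on stdin is not echoed, so the one below is an assumed linear concatenation of the two partitions, which is consistent with the holders links the test then checks on both p1 and p2:

    # Assumed dm table (not shown in the log): concatenate p1 and p2 linearly.
    # 262144 sectors per partition, so the joined device is 524288 sectors.
    dmsetup create nvme_dm_test <<'EOF'
    0 262144 linear /dev/nvme0n1p1 0
    262144 262144 linear /dev/nvme0n1p2 0
    EOF
    dm=$(readlink -f /dev/mapper/nvme_dm_test)  # resolves to /dev/dm-0 below
    dm=${dm##*/}
    # Both partitions should now list the dm device as a holder
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]

The holders checks are exactly the devices.sh@168/@169 tests that appear in the trace once the dm device exists.
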
00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60068 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.483 15:12:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.741 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.741 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:27.741 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:27.741 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.741 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.742 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.742 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.742 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.000 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.000 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.000 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.000 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.259 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.259 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.518 15:12:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.777 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:04:29.346 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:29.346 00:04:29.346 real 0m5.183s 00:04:29.346 user 0m0.995s 00:04:29.346 sys 0m1.105s 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.346 ************************************ 00:04:29.346 END TEST dm_mount 00:04:29.346 ************************************ 00:04:29.346 15:12:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:29.346 15:12:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:29.346 15:12:42 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:29.346 15:12:42 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:29.346 15:12:42 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.604 15:12:42 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.604 15:12:42 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.604 15:12:42 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.604 15:12:42 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.862 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.862 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:29.862 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.862 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.862 15:12:43 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:29.862 00:04:29.862 real 0m12.641s 00:04:29.862 user 0m3.404s 00:04:29.862 sys 0m3.521s 00:04:29.862 15:12:43 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.862 ************************************ 00:04:29.862 END TEST devices 00:04:29.862 ************************************ 00:04:29.862 15:12:43 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.862 15:12:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:29.862 00:04:29.862 real 0m43.996s 00:04:29.862 user 0m10.413s 00:04:29.862 sys 0m13.036s 00:04:29.862 15:12:43 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.862 15:12:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.862 ************************************ 00:04:29.862 END TEST setup.sh 00:04:29.862 ************************************ 00:04:29.862 15:12:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:29.862 15:12:43 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:30.428 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.686 Hugepages 00:04:30.686 node hugesize free / total 00:04:30.686 node0 1048576kB 0 / 0 00:04:30.686 node0 2048kB 2048 / 2048 00:04:30.686 00:04:30.686 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.944 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:30.944 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:30.944 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:31.202 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:31.202 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:31.202 15:12:44 -- spdk/autotest.sh@130 -- # uname -s 00:04:31.202 15:12:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:31.202 15:12:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:31.202 15:12:44 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.337 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.337 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.337 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.337 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.337 15:12:45 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:33.715 15:12:46 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:33.715 15:12:46 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:33.715 15:12:46 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.715 15:12:46 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:33.715 15:12:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:33.715 15:12:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:33.715 15:12:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.715 15:12:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:33.715 15:12:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:33.715 15:12:47 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:33.715 15:12:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:33.715 15:12:47 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.974 Waiting for block devices as requested 00:04:33.974 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.232 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.232 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:34.491 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.757 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:39.757 15:12:52 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:39.757 15:12:52 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.757 15:12:52 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:39.757 15:12:52 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:39.757 15:12:52 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:39.757 15:12:52 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:39.757 15:12:52 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:39.757 15:12:52 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:39.757 15:12:52 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:39.757 15:12:52 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:39.757 15:12:52 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1557 -- # continue 00:04:39.757 15:12:52 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:39.757 15:12:52 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.757 15:12:52 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:39.757 15:12:52 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:39.757 15:12:52 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:39.757 15:12:52 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:39.757 15:12:52 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:39.757 15:12:52 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:39.757 15:12:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1557 -- # continue 00:04:39.757 15:12:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:39.757 15:12:53 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:39.757 15:12:53 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:04:39.757 15:12:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:39.757 15:12:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:39.757 15:12:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:39.757 15:12:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1557 -- # continue 00:04:39.757 15:12:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:39.757 15:12:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:39.757 15:12:53 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:04:39.757 15:12:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:39.757 15:12:53 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:39.757 15:12:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:39.757 15:12:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:04:39.757 15:12:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:39.757 15:12:53 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:39.757 15:12:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:39.757 15:12:53 -- common/autotest_common.sh@1557 -- # continue 00:04:39.757 15:12:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:39.757 15:12:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.757 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.757 15:12:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:39.757 15:12:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.757 15:12:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.757 15:12:53 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.607 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.607 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.867 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.867 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.867 15:12:54 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:40.867 15:12:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.867 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:40.867 15:12:54 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:40.867 15:12:54 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:40.867 15:12:54 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.867 15:12:54 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:40.867 15:12:54 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:40.867 15:12:54 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:40.867 15:12:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:40.867 15:12:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:40.867 15:12:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.867 15:12:54 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:40.867 15:12:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:41.127 15:12:54 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:41.127 15:12:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:41.127 15:12:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:41.127 15:12:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.127 15:12:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:41.127 15:12:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.127 15:12:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:41.127 15:12:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.127 15:12:54 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:41.127 15:12:54 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:41.127 15:12:54 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:41.127 15:12:54 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:41.127 15:12:54 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:41.127 15:12:54 -- common/autotest_common.sh@1593 -- # return 0 00:04:41.127 15:12:54 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:41.127 15:12:54 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:41.127 15:12:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.127 15:12:54 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:41.127 15:12:54 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:41.127 15:12:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.127 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.127 15:12:54 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:41.127 15:12:54 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.127 15:12:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.127 15:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.127 15:12:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.127 ************************************ 00:04:41.127 START TEST env 00:04:41.127 ************************************ 00:04:41.127 15:12:54 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:41.127 * Looking for test storage... 00:04:41.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:41.127 15:12:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.127 15:12:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.127 15:12:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.127 15:12:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.127 ************************************ 00:04:41.127 START TEST env_memory 00:04:41.127 ************************************ 00:04:41.127 15:12:54 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:41.127 00:04:41.127 00:04:41.127 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.127 http://cunit.sourceforge.net/ 00:04:41.127 00:04:41.127 00:04:41.127 Suite: memory 00:04:41.127 Test: alloc and free memory map ...[2024-07-11 15:12:54.702493] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:41.386 passed 00:04:41.386 Test: mem map translation ...[2024-07-11 15:12:54.768230] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:41.386 [2024-07-11 15:12:54.768492] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:41.386 [2024-07-11 15:12:54.768749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:41.386 [2024-07-11 15:12:54.768785] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:41.386 passed 00:04:41.386 Test: mem map registration ...[2024-07-11 15:12:54.872460] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:41.386 [2024-07-11 15:12:54.872538] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:41.386 passed 00:04:41.645 Test: mem map adjacent registrations ...passed 00:04:41.645 00:04:41.646 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.646 suites 1 1 n/a 0 0 00:04:41.646 tests 4 4 4 0 0 00:04:41.646 asserts 152 152 152 0 n/a 00:04:41.646 00:04:41.646 Elapsed time = 0.357 seconds 00:04:41.646 00:04:41.646 real 0m0.399s 00:04:41.646 user 0m0.363s 00:04:41.646 sys 0m0.027s 00:04:41.646 15:12:55 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.646 15:12:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:41.646 ************************************ 00:04:41.646 END TEST env_memory 00:04:41.646 ************************************ 00:04:41.646 15:12:55 env -- common/autotest_common.sh@1142 -- # return 0 00:04:41.646 15:12:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:41.646 15:12:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.646 15:12:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.646 15:12:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.646 ************************************ 00:04:41.646 START TEST env_vtophys 00:04:41.646 ************************************ 00:04:41.646 15:12:55 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:41.646 EAL: lib.eal log level changed from notice to debug 00:04:41.646 EAL: Detected lcore 0 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 1 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 2 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 3 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 4 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 5 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 6 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 7 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 8 as core 0 on socket 0 00:04:41.646 EAL: Detected lcore 9 as core 0 on socket 0 00:04:41.646 EAL: Maximum logical cores by configuration: 128 00:04:41.646 EAL: Detected CPU lcores: 10 00:04:41.646 EAL: Detected NUMA nodes: 1 00:04:41.646 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:41.646 EAL: Detected shared linkage of DPDK 00:04:41.646 EAL: No shared files mode enabled, IPC will be disabled 00:04:41.646 EAL: Selected IOVA mode 'PA' 00:04:41.646 EAL: Probing VFIO support... 00:04:41.646 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:41.646 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:41.646 EAL: Ask a virtual area of 0x2e000 bytes 00:04:41.646 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:41.646 EAL: Setting up physically contiguous memory... 
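Note: the EAL lines above show the VFIO probe failing (/sys/module/vfio is absent), which is why the earlier setup.sh passes bound every NVMe controller to uio_pci_generic rather than vfio-pci. A minimal bash sketch of that driver decision, assuming only the standard Linux sysfs paths (this helper is illustrative, not part of setup.sh):

  # Pick a userspace I/O driver the way a setup script might: prefer
  # vfio-pci when the module is loaded and an IOMMU is active, else fall back.
  if [[ -e /sys/module/vfio_pci ]] && compgen -G '/sys/kernel/iommu_groups/*' >/dev/null; then
      driver=vfio-pci
  else
      driver=uio_pci_generic   # the fallback taken in this run
  fi
  echo "would bind NVMe controllers to $driver"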
00:04:41.646 EAL: Setting maximum number of open files to 524288 00:04:41.646 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:41.646 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:41.646 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.646 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:41.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.646 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.646 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:41.646 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:41.646 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.646 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:41.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.646 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.646 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:41.646 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:41.646 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.646 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:41.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.646 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.646 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:41.646 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:41.646 EAL: Ask a virtual area of 0x61000 bytes 00:04:41.646 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:41.646 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:41.646 EAL: Ask a virtual area of 0x400000000 bytes 00:04:41.646 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:41.646 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:41.646 EAL: Hugepages will be freed exactly as allocated. 00:04:41.646 EAL: No shared files mode enabled, IPC is disabled 00:04:41.646 EAL: No shared files mode enabled, IPC is disabled 00:04:41.905 EAL: TSC frequency is ~2200000 KHz 00:04:41.905 EAL: Main lcore 0 is ready (tid=7f69f04d6a40;cpuset=[0]) 00:04:41.905 EAL: Trying to obtain current memory policy. 00:04:41.905 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.905 EAL: Restoring previous memory policy: 0 00:04:41.905 EAL: request: mp_malloc_sync 00:04:41.905 EAL: No shared files mode enabled, IPC is disabled 00:04:41.905 EAL: Heap on socket 0 was expanded by 2MB 00:04:41.905 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:41.905 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:41.905 EAL: Mem event callback 'spdk:(nil)' registered 00:04:41.905 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:41.905 00:04:41.905 00:04:41.905 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.905 http://cunit.sourceforge.net/ 00:04:41.905 00:04:41.905 00:04:41.905 Suite: components_suite 00:04:42.164 Test: vtophys_malloc_test ...passed 00:04:42.164 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
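Note: the expand/shrink cycle that follows repeats the same three steps (restore memory policy, malloc triggers "Heap on socket 0 was expanded", free triggers "was shrunk") for sizes roughly doubling from 4MB up to 1026MB, exercising vtophys translation at each size. A hypothetical observer loop, assuming 2 MB hugepages as reported in the node table earlier, could watch the pool drain and refill while the test runs:

  # Illustrative only: sample the 2 MB hugepage pool twice a second
  # in a second shell while env_vtophys runs (stop with Ctrl-C).
  while sleep 0.5; do
      grep -E 'HugePages_(Total|Free)' /proc/meminfo | tr '\n' ' '; echo
  done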
00:04:42.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.164 EAL: Restoring previous memory policy: 4 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was expanded by 4MB 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was shrunk by 4MB 00:04:42.164 EAL: Trying to obtain current memory policy. 00:04:42.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.164 EAL: Restoring previous memory policy: 4 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was expanded by 6MB 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was shrunk by 6MB 00:04:42.164 EAL: Trying to obtain current memory policy. 00:04:42.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.164 EAL: Restoring previous memory policy: 4 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was expanded by 10MB 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was shrunk by 10MB 00:04:42.164 EAL: Trying to obtain current memory policy. 00:04:42.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.164 EAL: Restoring previous memory policy: 4 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was expanded by 18MB 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was shrunk by 18MB 00:04:42.164 EAL: Trying to obtain current memory policy. 00:04:42.164 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.164 EAL: Restoring previous memory policy: 4 00:04:42.164 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.164 EAL: request: mp_malloc_sync 00:04:42.164 EAL: No shared files mode enabled, IPC is disabled 00:04:42.164 EAL: Heap on socket 0 was expanded by 34MB 00:04:42.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.423 EAL: request: mp_malloc_sync 00:04:42.423 EAL: No shared files mode enabled, IPC is disabled 00:04:42.423 EAL: Heap on socket 0 was shrunk by 34MB 00:04:42.423 EAL: Trying to obtain current memory policy. 
00:04:42.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.423 EAL: Restoring previous memory policy: 4 00:04:42.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.423 EAL: request: mp_malloc_sync 00:04:42.423 EAL: No shared files mode enabled, IPC is disabled 00:04:42.423 EAL: Heap on socket 0 was expanded by 66MB 00:04:42.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.423 EAL: request: mp_malloc_sync 00:04:42.423 EAL: No shared files mode enabled, IPC is disabled 00:04:42.423 EAL: Heap on socket 0 was shrunk by 66MB 00:04:42.423 EAL: Trying to obtain current memory policy. 00:04:42.423 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.423 EAL: Restoring previous memory policy: 4 00:04:42.423 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.423 EAL: request: mp_malloc_sync 00:04:42.423 EAL: No shared files mode enabled, IPC is disabled 00:04:42.423 EAL: Heap on socket 0 was expanded by 130MB 00:04:42.681 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.681 EAL: request: mp_malloc_sync 00:04:42.681 EAL: No shared files mode enabled, IPC is disabled 00:04:42.681 EAL: Heap on socket 0 was shrunk by 130MB 00:04:42.940 EAL: Trying to obtain current memory policy. 00:04:42.940 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:42.940 EAL: Restoring previous memory policy: 4 00:04:42.940 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.940 EAL: request: mp_malloc_sync 00:04:42.940 EAL: No shared files mode enabled, IPC is disabled 00:04:42.940 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.199 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.199 EAL: request: mp_malloc_sync 00:04:43.199 EAL: No shared files mode enabled, IPC is disabled 00:04:43.199 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.458 EAL: Trying to obtain current memory policy. 00:04:43.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.717 EAL: Restoring previous memory policy: 4 00:04:43.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.717 EAL: request: mp_malloc_sync 00:04:43.717 EAL: No shared files mode enabled, IPC is disabled 00:04:43.717 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.650 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.650 EAL: request: mp_malloc_sync 00:04:44.650 EAL: No shared files mode enabled, IPC is disabled 00:04:44.650 EAL: Heap on socket 0 was shrunk by 514MB 00:04:45.216 EAL: Trying to obtain current memory policy. 
00:04:45.216 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.216 EAL: Restoring previous memory policy: 4 00:04:45.216 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.216 EAL: request: mp_malloc_sync 00:04:45.216 EAL: No shared files mode enabled, IPC is disabled 00:04:45.216 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.594 EAL: request: mp_malloc_sync 00:04:46.594 EAL: No shared files mode enabled, IPC is disabled 00:04:46.594 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:47.972 passed 00:04:47.972 00:04:47.972 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.972 suites 1 1 n/a 0 0 00:04:47.972 tests 2 2 2 0 0 00:04:47.972 asserts 5411 5411 5411 0 n/a 00:04:47.972 00:04:47.972 Elapsed time = 5.969 seconds 00:04:47.972 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.972 EAL: request: mp_malloc_sync 00:04:47.972 EAL: No shared files mode enabled, IPC is disabled 00:04:47.972 EAL: Heap on socket 0 was shrunk by 2MB 00:04:47.972 EAL: No shared files mode enabled, IPC is disabled 00:04:47.973 EAL: No shared files mode enabled, IPC is disabled 00:04:47.973 EAL: No shared files mode enabled, IPC is disabled 00:04:47.973 00:04:47.973 real 0m6.279s 00:04:47.973 user 0m5.451s 00:04:47.973 sys 0m0.665s 00:04:47.973 15:13:01 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.973 ************************************ 00:04:47.973 END TEST env_vtophys 00:04:47.973 ************************************ 00:04:47.973 15:13:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:47.973 15:13:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:47.973 15:13:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:47.973 15:13:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.973 15:13:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.973 15:13:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.973 ************************************ 00:04:47.973 START TEST env_pci 00:04:47.973 ************************************ 00:04:47.973 15:13:01 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:47.973 00:04:47.973 00:04:47.973 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.973 http://cunit.sourceforge.net/ 00:04:47.973 00:04:47.973 00:04:47.973 Suite: pci 00:04:47.973 Test: pci_hook ...[2024-07-11 15:13:01.444820] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61884 has claimed it 00:04:47.973 passed 00:04:47.973 00:04:47.973 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.973 suites 1 1 n/a 0 0 00:04:47.973 tests 1 1 1 0 0 00:04:47.973 asserts 25 25 25 0 n/a 00:04:47.973 00:04:47.973 Elapsed time = 0.008 seconds 00:04:47.973 EAL: Cannot find device (10000:00:01.0) 00:04:47.973 EAL: Failed to attach device on primary process 00:04:47.973 00:04:47.973 real 0m0.081s 00:04:47.973 user 0m0.039s 00:04:47.973 sys 0m0.041s 00:04:47.973 15:13:01 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.973 15:13:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:47.973 ************************************ 00:04:47.973 END TEST env_pci 00:04:47.973 ************************************ 00:04:47.973 15:13:01 env -- common/autotest_common.sh@1142 -- # 
return 0 00:04:47.973 15:13:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.973 15:13:01 env -- env/env.sh@15 -- # uname 00:04:47.973 15:13:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:47.973 15:13:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.973 15:13:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.973 15:13:01 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:47.973 15:13:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.973 15:13:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.973 ************************************ 00:04:47.973 START TEST env_dpdk_post_init 00:04:47.973 ************************************ 00:04:47.973 15:13:01 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:48.232 EAL: Detected CPU lcores: 10 00:04:48.232 EAL: Detected NUMA nodes: 1 00:04:48.232 EAL: Detected shared linkage of DPDK 00:04:48.232 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.232 EAL: Selected IOVA mode 'PA' 00:04:48.232 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.232 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:48.232 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:48.232 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:48.232 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:48.232 Starting DPDK initialization... 00:04:48.232 Starting SPDK post initialization... 00:04:48.232 SPDK NVMe probe 00:04:48.232 Attaching to 0000:00:10.0 00:04:48.232 Attaching to 0000:00:11.0 00:04:48.232 Attaching to 0000:00:12.0 00:04:48.232 Attaching to 0000:00:13.0 00:04:48.232 Attached to 0000:00:10.0 00:04:48.232 Attached to 0000:00:11.0 00:04:48.232 Attached to 0000:00:13.0 00:04:48.232 Attached to 0000:00:12.0 00:04:48.232 Cleaning up... 
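Note: the argv assembled above is the standard EAL invocation pattern: -c 0x1 restricts the app to core 0, and --base-virtaddr=0x200000000000 pins the start of the memory maps so secondary processes can attach at identical addresses. A sketch of the resulting command line (binary path taken verbatim from the log):

  # Reproduce the env_dpdk_post_init invocation used in this run.
  app=/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init
  "$app" -c 0x1 --base-virtaddr=0x200000000000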
00:04:48.232 00:04:48.232 real 0m0.253s 00:04:48.232 user 0m0.091s 00:04:48.232 sys 0m0.067s 00:04:48.232 15:13:01 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.232 ************************************ 00:04:48.232 END TEST env_dpdk_post_init 00:04:48.232 15:13:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.232 ************************************ 00:04:48.232 15:13:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.232 15:13:01 env -- env/env.sh@26 -- # uname 00:04:48.232 15:13:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:48.232 15:13:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.232 15:13:01 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.232 15:13:01 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.232 15:13:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.492 ************************************ 00:04:48.492 START TEST env_mem_callbacks 00:04:48.492 ************************************ 00:04:48.492 15:13:01 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:48.492 EAL: Detected CPU lcores: 10 00:04:48.492 EAL: Detected NUMA nodes: 1 00:04:48.492 EAL: Detected shared linkage of DPDK 00:04:48.492 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:48.492 EAL: Selected IOVA mode 'PA' 00:04:48.492 00:04:48.492 00:04:48.492 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.492 http://cunit.sourceforge.net/ 00:04:48.492 00:04:48.492 00:04:48.492 Suite: memory 00:04:48.492 Test: test ... 00:04:48.492 register 0x200000200000 2097152 00:04:48.492 malloc 3145728 00:04:48.492 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:48.492 register 0x200000400000 4194304 00:04:48.492 buf 0x2000004fffc0 len 3145728 PASSED 00:04:48.492 malloc 64 00:04:48.492 buf 0x2000004ffec0 len 64 PASSED 00:04:48.492 malloc 4194304 00:04:48.492 register 0x200000800000 6291456 00:04:48.492 buf 0x2000009fffc0 len 4194304 PASSED 00:04:48.492 free 0x2000004fffc0 3145728 00:04:48.492 free 0x2000004ffec0 64 00:04:48.492 unregister 0x200000400000 4194304 PASSED 00:04:48.492 free 0x2000009fffc0 4194304 00:04:48.492 unregister 0x200000800000 6291456 PASSED 00:04:48.492 malloc 8388608 00:04:48.492 register 0x200000400000 10485760 00:04:48.492 buf 0x2000005fffc0 len 8388608 PASSED 00:04:48.492 free 0x2000005fffc0 8388608 00:04:48.492 unregister 0x200000400000 10485760 PASSED 00:04:48.492 passed 00:04:48.492 00:04:48.492 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.492 suites 1 1 n/a 0 0 00:04:48.492 tests 1 1 1 0 0 00:04:48.492 asserts 15 15 15 0 n/a 00:04:48.492 00:04:48.492 Elapsed time = 0.053 seconds 00:04:48.492 00:04:48.492 real 0m0.237s 00:04:48.492 user 0m0.089s 00:04:48.492 sys 0m0.046s 00:04:48.492 15:13:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.492 15:13:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:48.492 ************************************ 00:04:48.492 END TEST env_mem_callbacks 00:04:48.492 ************************************ 00:04:48.752 15:13:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:48.752 ************************************ 00:04:48.752 END TEST env 00:04:48.752 ************************************ 00:04:48.752 00:04:48.752 real 0m7.602s 00:04:48.752 user 
0m6.158s 00:04:48.752 sys 0m1.051s 00:04:48.752 15:13:02 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.752 15:13:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.752 15:13:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.752 15:13:02 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:48.752 15:13:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.752 15:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.752 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:04:48.752 ************************************ 00:04:48.752 START TEST rpc 00:04:48.752 ************************************ 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:48.752 * Looking for test storage... 00:04:48.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.752 15:13:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62003 00:04:48.752 15:13:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:48.752 15:13:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.752 15:13:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62003 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@829 -- # '[' -z 62003 ']' 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.752 15:13:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.011 [2024-07-11 15:13:02.385164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:49.011 [2024-07-11 15:13:02.385368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62003 ] 00:04:49.011 [2024-07-11 15:13:02.558549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.271 [2024-07-11 15:13:02.710512] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.271 [2024-07-11 15:13:02.710559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62003' to capture a snapshot of events at runtime. 00:04:49.271 [2024-07-11 15:13:02.710592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:49.271 [2024-07-11 15:13:02.710603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:49.271 [2024-07-11 15:13:02.710614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62003 for offline analysis/debug. 
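Note: spdk_tgt is started in the background here and the harness blocks in waitforlisten until the RPC socket answers. A hedged sketch of that readiness probe (socket path from the log; rpc.py and rpc_get_methods are standard SPDK tooling, and polling them this way is one plausible spelling of the same idea):

  sock=/var/tmp/spdk.sock
  # Poll the RPC socket until spdk_tgt accepts requests.
  until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
  echo 'spdk_tgt is ready'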
00:04:49.271 [2024-07-11 15:13:02.710647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.838 15:13:03 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.838 15:13:03 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:49.838 15:13:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.838 15:13:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:49.838 15:13:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:49.838 15:13:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:49.839 15:13:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.839 15:13:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.839 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.839 ************************************ 00:04:49.839 START TEST rpc_integrity 00:04:49.839 ************************************ 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:49.839 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:49.839 { 00:04:49.839 "name": "Malloc0", 00:04:49.839 "aliases": [ 00:04:49.839 "ba2a3370-525b-4366-9eca-0d91dac358ff" 00:04:49.839 ], 00:04:49.839 "product_name": "Malloc disk", 00:04:49.839 "block_size": 512, 00:04:49.839 "num_blocks": 16384, 00:04:49.839 "uuid": "ba2a3370-525b-4366-9eca-0d91dac358ff", 00:04:49.839 "assigned_rate_limits": { 00:04:49.839 "rw_ios_per_sec": 0, 00:04:49.839 "rw_mbytes_per_sec": 0, 00:04:49.839 "r_mbytes_per_sec": 0, 00:04:49.839 "w_mbytes_per_sec": 0 00:04:49.839 }, 00:04:49.839 "claimed": false, 00:04:49.839 "zoned": false, 00:04:49.839 "supported_io_types": { 00:04:49.839 "read": true, 00:04:49.839 "write": true, 00:04:49.839 "unmap": true, 00:04:49.839 "flush": true, 
00:04:49.839 "reset": true, 00:04:49.839 "nvme_admin": false, 00:04:49.839 "nvme_io": false, 00:04:49.839 "nvme_io_md": false, 00:04:49.839 "write_zeroes": true, 00:04:49.839 "zcopy": true, 00:04:49.839 "get_zone_info": false, 00:04:49.839 "zone_management": false, 00:04:49.839 "zone_append": false, 00:04:49.839 "compare": false, 00:04:49.839 "compare_and_write": false, 00:04:49.839 "abort": true, 00:04:49.839 "seek_hole": false, 00:04:49.839 "seek_data": false, 00:04:49.839 "copy": true, 00:04:49.839 "nvme_iov_md": false 00:04:49.839 }, 00:04:49.839 "memory_domains": [ 00:04:49.839 { 00:04:49.839 "dma_device_id": "system", 00:04:49.839 "dma_device_type": 1 00:04:49.839 }, 00:04:49.839 { 00:04:49.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:49.839 "dma_device_type": 2 00:04:49.839 } 00:04:49.839 ], 00:04:49.839 "driver_specific": {} 00:04:49.839 } 00:04:49.839 ]' 00:04:49.839 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.097 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.097 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.097 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.097 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.097 [2024-07-11 15:13:03.498775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.097 [2024-07-11 15:13:03.498854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.097 [2024-07-11 15:13:03.498899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:50.097 [2024-07-11 15:13:03.498919] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.097 [2024-07-11 15:13:03.501474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.097 [2024-07-11 15:13:03.501512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.097 Passthru0 00:04:50.097 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.097 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.097 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.097 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.097 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.097 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.097 { 00:04:50.097 "name": "Malloc0", 00:04:50.097 "aliases": [ 00:04:50.097 "ba2a3370-525b-4366-9eca-0d91dac358ff" 00:04:50.097 ], 00:04:50.097 "product_name": "Malloc disk", 00:04:50.097 "block_size": 512, 00:04:50.097 "num_blocks": 16384, 00:04:50.097 "uuid": "ba2a3370-525b-4366-9eca-0d91dac358ff", 00:04:50.097 "assigned_rate_limits": { 00:04:50.097 "rw_ios_per_sec": 0, 00:04:50.097 "rw_mbytes_per_sec": 0, 00:04:50.097 "r_mbytes_per_sec": 0, 00:04:50.097 "w_mbytes_per_sec": 0 00:04:50.097 }, 00:04:50.097 "claimed": true, 00:04:50.097 "claim_type": "exclusive_write", 00:04:50.097 "zoned": false, 00:04:50.097 "supported_io_types": { 00:04:50.097 "read": true, 00:04:50.097 "write": true, 00:04:50.097 "unmap": true, 00:04:50.097 "flush": true, 00:04:50.097 "reset": true, 00:04:50.097 "nvme_admin": false, 00:04:50.097 "nvme_io": false, 00:04:50.097 "nvme_io_md": false, 00:04:50.097 "write_zeroes": true, 00:04:50.097 "zcopy": true, 
00:04:50.097 "get_zone_info": false, 00:04:50.097 "zone_management": false, 00:04:50.097 "zone_append": false, 00:04:50.097 "compare": false, 00:04:50.097 "compare_and_write": false, 00:04:50.097 "abort": true, 00:04:50.097 "seek_hole": false, 00:04:50.097 "seek_data": false, 00:04:50.097 "copy": true, 00:04:50.097 "nvme_iov_md": false 00:04:50.097 }, 00:04:50.097 "memory_domains": [ 00:04:50.097 { 00:04:50.097 "dma_device_id": "system", 00:04:50.097 "dma_device_type": 1 00:04:50.097 }, 00:04:50.097 { 00:04:50.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.097 "dma_device_type": 2 00:04:50.097 } 00:04:50.097 ], 00:04:50.097 "driver_specific": {} 00:04:50.097 }, 00:04:50.097 { 00:04:50.097 "name": "Passthru0", 00:04:50.097 "aliases": [ 00:04:50.097 "5a55fa8d-706f-5764-ae8d-1a4c94d40009" 00:04:50.097 ], 00:04:50.097 "product_name": "passthru", 00:04:50.097 "block_size": 512, 00:04:50.097 "num_blocks": 16384, 00:04:50.097 "uuid": "5a55fa8d-706f-5764-ae8d-1a4c94d40009", 00:04:50.097 "assigned_rate_limits": { 00:04:50.097 "rw_ios_per_sec": 0, 00:04:50.097 "rw_mbytes_per_sec": 0, 00:04:50.097 "r_mbytes_per_sec": 0, 00:04:50.097 "w_mbytes_per_sec": 0 00:04:50.097 }, 00:04:50.097 "claimed": false, 00:04:50.097 "zoned": false, 00:04:50.097 "supported_io_types": { 00:04:50.097 "read": true, 00:04:50.097 "write": true, 00:04:50.097 "unmap": true, 00:04:50.097 "flush": true, 00:04:50.097 "reset": true, 00:04:50.097 "nvme_admin": false, 00:04:50.097 "nvme_io": false, 00:04:50.097 "nvme_io_md": false, 00:04:50.097 "write_zeroes": true, 00:04:50.097 "zcopy": true, 00:04:50.097 "get_zone_info": false, 00:04:50.097 "zone_management": false, 00:04:50.097 "zone_append": false, 00:04:50.097 "compare": false, 00:04:50.097 "compare_and_write": false, 00:04:50.097 "abort": true, 00:04:50.097 "seek_hole": false, 00:04:50.097 "seek_data": false, 00:04:50.097 "copy": true, 00:04:50.097 "nvme_iov_md": false 00:04:50.097 }, 00:04:50.097 "memory_domains": [ 00:04:50.097 { 00:04:50.097 "dma_device_id": "system", 00:04:50.097 "dma_device_type": 1 00:04:50.097 }, 00:04:50.097 { 00:04:50.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.097 "dma_device_type": 2 00:04:50.097 } 00:04:50.097 ], 00:04:50.097 "driver_specific": { 00:04:50.097 "passthru": { 00:04:50.097 "name": "Passthru0", 00:04:50.097 "base_bdev_name": "Malloc0" 00:04:50.097 } 00:04:50.097 } 00:04:50.097 } 00:04:50.097 ]' 00:04:50.097 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:50.098 ************************************ 00:04:50.098 END TEST rpc_integrity 00:04:50.098 ************************************ 00:04:50.098 15:13:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.098 00:04:50.098 real 0m0.350s 00:04:50.098 user 0m0.217s 00:04:50.098 sys 0m0.040s 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.098 15:13:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 15:13:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.356 15:13:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.356 15:13:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.356 15:13:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.356 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 ************************************ 00:04:50.356 START TEST rpc_plugins 00:04:50.356 ************************************ 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:50.356 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.356 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.356 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.356 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.356 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:50.356 { 00:04:50.356 "name": "Malloc1", 00:04:50.356 "aliases": [ 00:04:50.356 "f847bae7-7dc0-4596-8a37-2b4b12855447" 00:04:50.356 ], 00:04:50.356 "product_name": "Malloc disk", 00:04:50.356 "block_size": 4096, 00:04:50.356 "num_blocks": 256, 00:04:50.356 "uuid": "f847bae7-7dc0-4596-8a37-2b4b12855447", 00:04:50.356 "assigned_rate_limits": { 00:04:50.356 "rw_ios_per_sec": 0, 00:04:50.356 "rw_mbytes_per_sec": 0, 00:04:50.356 "r_mbytes_per_sec": 0, 00:04:50.356 "w_mbytes_per_sec": 0 00:04:50.356 }, 00:04:50.356 "claimed": false, 00:04:50.356 "zoned": false, 00:04:50.356 "supported_io_types": { 00:04:50.356 "read": true, 00:04:50.356 "write": true, 00:04:50.356 "unmap": true, 00:04:50.356 "flush": true, 00:04:50.356 "reset": true, 00:04:50.356 "nvme_admin": false, 00:04:50.356 "nvme_io": false, 00:04:50.356 "nvme_io_md": false, 00:04:50.356 "write_zeroes": true, 00:04:50.356 "zcopy": true, 00:04:50.356 "get_zone_info": false, 00:04:50.356 "zone_management": false, 00:04:50.356 "zone_append": false, 00:04:50.356 "compare": false, 00:04:50.356 "compare_and_write": false, 00:04:50.356 "abort": true, 00:04:50.356 "seek_hole": false, 00:04:50.356 "seek_data": false, 00:04:50.356 "copy": true, 00:04:50.356 "nvme_iov_md": false 00:04:50.356 }, 00:04:50.356 "memory_domains": [ 00:04:50.356 { 00:04:50.356 "dma_device_id": "system", 00:04:50.356 
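Note: rpc_trace_cmd_test now queries trace_get_info; since spdk_tgt was launched with -e bdev, only the bdev tpoint group (mask 0x8) should be fully enabled, which is what the JSON below reports. A one-line check in the same spirit (jq paths inferred from that JSON):

  # Expect "0x8" for the group mask and "0xffffffffffffffff" for bdev tpoints.
  scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask, .bdev.tpoint_mask'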
"dma_device_type": 1 00:04:50.356 }, 00:04:50.356 { 00:04:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.356 "dma_device_type": 2 00:04:50.356 } 00:04:50.356 ], 00:04:50.356 "driver_specific": {} 00:04:50.356 } 00:04:50.356 ]' 00:04:50.356 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:50.356 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:50.357 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.357 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.357 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:50.357 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:50.357 ************************************ 00:04:50.357 END TEST rpc_plugins 00:04:50.357 ************************************ 00:04:50.357 15:13:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:50.357 00:04:50.357 real 0m0.164s 00:04:50.357 user 0m0.103s 00:04:50.357 sys 0m0.022s 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.357 15:13:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:50.357 15:13:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.357 15:13:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:50.357 15:13:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.357 15:13:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.357 15:13:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.357 ************************************ 00:04:50.357 START TEST rpc_trace_cmd_test 00:04:50.357 ************************************ 00:04:50.357 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:50.357 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:50.357 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:50.357 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.357 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.616 15:13:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.616 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:50.616 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62003", 00:04:50.616 "tpoint_group_mask": "0x8", 00:04:50.616 "iscsi_conn": { 00:04:50.616 "mask": "0x2", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "scsi": { 00:04:50.616 "mask": "0x4", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "bdev": { 00:04:50.616 "mask": "0x8", 00:04:50.616 "tpoint_mask": "0xffffffffffffffff" 00:04:50.616 }, 00:04:50.616 "nvmf_rdma": { 00:04:50.616 "mask": "0x10", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "nvmf_tcp": { 00:04:50.616 "mask": "0x20", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "ftl": 
{ 00:04:50.616 "mask": "0x40", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "blobfs": { 00:04:50.616 "mask": "0x80", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "dsa": { 00:04:50.616 "mask": "0x200", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "thread": { 00:04:50.616 "mask": "0x400", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "nvme_pcie": { 00:04:50.616 "mask": "0x800", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "iaa": { 00:04:50.616 "mask": "0x1000", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "nvme_tcp": { 00:04:50.616 "mask": "0x2000", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "bdev_nvme": { 00:04:50.616 "mask": "0x4000", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 }, 00:04:50.616 "sock": { 00:04:50.616 "mask": "0x8000", 00:04:50.616 "tpoint_mask": "0x0" 00:04:50.616 } 00:04:50.616 }' 00:04:50.616 15:13:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.616 ************************************ 00:04:50.616 END TEST rpc_trace_cmd_test 00:04:50.616 ************************************ 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.616 00:04:50.616 real 0m0.263s 00:04:50.616 user 0m0.224s 00:04:50.616 sys 0m0.031s 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.616 15:13:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.875 15:13:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:50.875 15:13:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.875 15:13:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.875 15:13:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.875 15:13:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.875 15:13:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.875 15:13:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.875 ************************************ 00:04:50.875 START TEST rpc_daemon_integrity 00:04:50.875 ************************************ 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq 
length 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.875 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.875 { 00:04:50.875 "name": "Malloc2", 00:04:50.875 "aliases": [ 00:04:50.875 "0a2bb138-b8f4-4b65-b6bb-516842dac556" 00:04:50.875 ], 00:04:50.875 "product_name": "Malloc disk", 00:04:50.875 "block_size": 512, 00:04:50.875 "num_blocks": 16384, 00:04:50.875 "uuid": "0a2bb138-b8f4-4b65-b6bb-516842dac556", 00:04:50.875 "assigned_rate_limits": { 00:04:50.875 "rw_ios_per_sec": 0, 00:04:50.875 "rw_mbytes_per_sec": 0, 00:04:50.875 "r_mbytes_per_sec": 0, 00:04:50.875 "w_mbytes_per_sec": 0 00:04:50.875 }, 00:04:50.875 "claimed": false, 00:04:50.875 "zoned": false, 00:04:50.875 "supported_io_types": { 00:04:50.875 "read": true, 00:04:50.875 "write": true, 00:04:50.875 "unmap": true, 00:04:50.875 "flush": true, 00:04:50.875 "reset": true, 00:04:50.875 "nvme_admin": false, 00:04:50.875 "nvme_io": false, 00:04:50.875 "nvme_io_md": false, 00:04:50.875 "write_zeroes": true, 00:04:50.875 "zcopy": true, 00:04:50.875 "get_zone_info": false, 00:04:50.875 "zone_management": false, 00:04:50.875 "zone_append": false, 00:04:50.875 "compare": false, 00:04:50.875 "compare_and_write": false, 00:04:50.875 "abort": true, 00:04:50.875 "seek_hole": false, 00:04:50.875 "seek_data": false, 00:04:50.875 "copy": true, 00:04:50.875 "nvme_iov_md": false 00:04:50.875 }, 00:04:50.875 "memory_domains": [ 00:04:50.875 { 00:04:50.875 "dma_device_id": "system", 00:04:50.875 "dma_device_type": 1 00:04:50.875 }, 00:04:50.875 { 00:04:50.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.875 "dma_device_type": 2 00:04:50.875 } 00:04:50.875 ], 00:04:50.875 "driver_specific": {} 00:04:50.875 } 00:04:50.875 ]' 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.876 [2024-07-11 15:13:04.422686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.876 [2024-07-11 15:13:04.422756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.876 [2024-07-11 15:13:04.422785] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:50.876 [2024-07-11 15:13:04.422798] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.876 [2024-07-11 15:13:04.425289] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.876 [2024-07-11 15:13:04.425329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.876 Passthru0 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.876 { 00:04:50.876 "name": "Malloc2", 00:04:50.876 "aliases": [ 00:04:50.876 "0a2bb138-b8f4-4b65-b6bb-516842dac556" 00:04:50.876 ], 00:04:50.876 "product_name": "Malloc disk", 00:04:50.876 "block_size": 512, 00:04:50.876 "num_blocks": 16384, 00:04:50.876 "uuid": "0a2bb138-b8f4-4b65-b6bb-516842dac556", 00:04:50.876 "assigned_rate_limits": { 00:04:50.876 "rw_ios_per_sec": 0, 00:04:50.876 "rw_mbytes_per_sec": 0, 00:04:50.876 "r_mbytes_per_sec": 0, 00:04:50.876 "w_mbytes_per_sec": 0 00:04:50.876 }, 00:04:50.876 "claimed": true, 00:04:50.876 "claim_type": "exclusive_write", 00:04:50.876 "zoned": false, 00:04:50.876 "supported_io_types": { 00:04:50.876 "read": true, 00:04:50.876 "write": true, 00:04:50.876 "unmap": true, 00:04:50.876 "flush": true, 00:04:50.876 "reset": true, 00:04:50.876 "nvme_admin": false, 00:04:50.876 "nvme_io": false, 00:04:50.876 "nvme_io_md": false, 00:04:50.876 "write_zeroes": true, 00:04:50.876 "zcopy": true, 00:04:50.876 "get_zone_info": false, 00:04:50.876 "zone_management": false, 00:04:50.876 "zone_append": false, 00:04:50.876 "compare": false, 00:04:50.876 "compare_and_write": false, 00:04:50.876 "abort": true, 00:04:50.876 "seek_hole": false, 00:04:50.876 "seek_data": false, 00:04:50.876 "copy": true, 00:04:50.876 "nvme_iov_md": false 00:04:50.876 }, 00:04:50.876 "memory_domains": [ 00:04:50.876 { 00:04:50.876 "dma_device_id": "system", 00:04:50.876 "dma_device_type": 1 00:04:50.876 }, 00:04:50.876 { 00:04:50.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.876 "dma_device_type": 2 00:04:50.876 } 00:04:50.876 ], 00:04:50.876 "driver_specific": {} 00:04:50.876 }, 00:04:50.876 { 00:04:50.876 "name": "Passthru0", 00:04:50.876 "aliases": [ 00:04:50.876 "6ae52c01-9100-52c9-9d4c-055233f4bff7" 00:04:50.876 ], 00:04:50.876 "product_name": "passthru", 00:04:50.876 "block_size": 512, 00:04:50.876 "num_blocks": 16384, 00:04:50.876 "uuid": "6ae52c01-9100-52c9-9d4c-055233f4bff7", 00:04:50.876 "assigned_rate_limits": { 00:04:50.876 "rw_ios_per_sec": 0, 00:04:50.876 "rw_mbytes_per_sec": 0, 00:04:50.876 "r_mbytes_per_sec": 0, 00:04:50.876 "w_mbytes_per_sec": 0 00:04:50.876 }, 00:04:50.876 "claimed": false, 00:04:50.876 "zoned": false, 00:04:50.876 "supported_io_types": { 00:04:50.876 "read": true, 00:04:50.876 "write": true, 00:04:50.876 "unmap": true, 00:04:50.876 "flush": true, 00:04:50.876 "reset": true, 00:04:50.876 "nvme_admin": false, 00:04:50.876 "nvme_io": false, 00:04:50.876 "nvme_io_md": false, 00:04:50.876 "write_zeroes": true, 00:04:50.876 "zcopy": true, 00:04:50.876 "get_zone_info": false, 00:04:50.876 "zone_management": false, 00:04:50.876 "zone_append": false, 00:04:50.876 "compare": 
false, 00:04:50.876 "compare_and_write": false, 00:04:50.876 "abort": true, 00:04:50.876 "seek_hole": false, 00:04:50.876 "seek_data": false, 00:04:50.876 "copy": true, 00:04:50.876 "nvme_iov_md": false 00:04:50.876 }, 00:04:50.876 "memory_domains": [ 00:04:50.876 { 00:04:50.876 "dma_device_id": "system", 00:04:50.876 "dma_device_type": 1 00:04:50.876 }, 00:04:50.876 { 00:04:50.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.876 "dma_device_type": 2 00:04:50.876 } 00:04:50.876 ], 00:04:50.876 "driver_specific": { 00:04:50.876 "passthru": { 00:04:50.876 "name": "Passthru0", 00:04:50.876 "base_bdev_name": "Malloc2" 00:04:50.876 } 00:04:50.876 } 00:04:50.876 } 00:04:50.876 ]' 00:04:50.876 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.136 ************************************ 00:04:51.136 END TEST rpc_daemon_integrity 00:04:51.136 ************************************ 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.136 00:04:51.136 real 0m0.341s 00:04:51.136 user 0m0.216s 00:04:51.136 sys 0m0.036s 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.136 15:13:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.136 15:13:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.136 15:13:04 rpc -- rpc/rpc.sh@84 -- # killprocess 62003 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@948 -- # '[' -z 62003 ']' 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@952 -- # kill -0 62003 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@953 -- # uname 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62003 00:04:51.136 killing process with pid 62003 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.136 
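The killprocess helper traced here is deliberately careful: before signalling PID 62003 it checks that the PID variable is non-empty, that the process still exists (kill -0), and that its comm name is not sudo; the kill and wait that complete the sequence follow just below. A stripped-down, Linux-only sketch of the same guard pattern (the real helper lives in autotest_common.sh and does a little more):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty PID
    kill -0 "$pid" 2> /dev/null || return 1   # process must still be alive
    local name
    name=$(ps --no-headers -o comm= "$pid")   # Linux ps syntax, as in the trace
    [ "$name" != sudo ] || return 1           # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                # terminate and reap the child
  }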
15:13:04 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62003' 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@967 -- # kill 62003 00:04:51.136 15:13:04 rpc -- common/autotest_common.sh@972 -- # wait 62003 00:04:53.050 00:04:53.050 real 0m4.313s 00:04:53.050 user 0m5.130s 00:04:53.050 sys 0m0.703s 00:04:53.050 15:13:06 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.050 15:13:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.050 ************************************ 00:04:53.050 END TEST rpc 00:04:53.050 ************************************ 00:04:53.050 15:13:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.050 15:13:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.050 15:13:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.050 15:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.050 15:13:06 -- common/autotest_common.sh@10 -- # set +x 00:04:53.050 ************************************ 00:04:53.050 START TEST skip_rpc 00:04:53.050 ************************************ 00:04:53.050 15:13:06 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:53.050 * Looking for test storage... 00:04:53.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:53.050 15:13:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.050 15:13:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:53.050 15:13:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:53.050 15:13:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.050 15:13:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.050 15:13:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.050 ************************************ 00:04:53.050 START TEST skip_rpc 00:04:53.050 ************************************ 00:04:53.050 15:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:53.050 15:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62213 00:04:53.050 15:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.051 15:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.051 15:13:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:53.341 [2024-07-11 15:13:06.764380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
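test_skip_rpc starts the target with --no-rpc-server, so there is no socket to poll and the script falls back to a plain sleep; the assertion that follows in the trace wraps rpc_cmd in NOT, which inverts the exit status so the test passes only because the RPC call fails. In sketch form (NOT, rpc_cmd, and killprocess are the autotest_common.sh helpers):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                        # no RPC socket exists, so just give startup time
  NOT rpc_cmd spdk_get_version   # succeeds only if the RPC call errors out
  killprocess "$spdk_pid"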
00:04:53.341 [2024-07-11 15:13:06.764583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:04:53.341 [2024-07-11 15:13:06.941921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.601 [2024-07-11 15:13:07.148741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62213 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 62213 ']' 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 62213 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62213 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62213' 00:04:58.872 killing process with pid 62213 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 62213 00:04:58.872 15:13:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 62213 00:04:59.807 ************************************ 00:04:59.807 END TEST skip_rpc 00:04:59.807 ************************************ 00:04:59.807 00:04:59.807 real 0m6.748s 00:04:59.807 user 0m6.325s 00:04:59.807 sys 0m0.315s 00:04:59.807 15:13:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.807 15:13:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:00.064 15:13:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.064 15:13:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:00.064 15:13:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.064 15:13:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.064 15:13:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.064 ************************************ 00:05:00.064 START TEST skip_rpc_with_json 00:05:00.064 ************************************ 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:00.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62312 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62312 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 62312 ']' 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.064 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.065 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.065 15:13:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.065 [2024-07-11 15:13:13.562749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
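Here the target is started with its RPC server enabled, so instead of a blind sleep the script can wait until the UNIX socket actually answers. A minimal sketch of such a wait loop, assuming scripts/rpc.py with its -s/-t options (the real waitforlisten in autotest_common.sh does more bookkeeping):

  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2> /dev/null; do
      # done as soon as the RPC socket responds to a harmless query
      if scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
        return 0
      fi
      sleep 0.1
    done
    return 1   # target died before its socket came up
  }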
00:05:00.065 [2024-07-11 15:13:13.563134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62312 ] 00:05:00.323 [2024-07-11 15:13:13.729848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.323 [2024-07-11 15:13:13.873771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.890 [2024-07-11 15:13:14.466024] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:00.890 request: 00:05:00.890 { 00:05:00.890 "trtype": "tcp", 00:05:00.890 "method": "nvmf_get_transports", 00:05:00.890 "req_id": 1 00:05:00.890 } 00:05:00.890 Got JSON-RPC error response 00:05:00.890 response: 00:05:00.890 { 00:05:00.890 "code": -19, 00:05:00.890 "message": "No such device" 00:05:00.890 } 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:00.890 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.891 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.891 [2024-07-11 15:13:14.478190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.891 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.891 15:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:00.891 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.891 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.149 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.149 15:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.149 { 00:05:01.149 "subsystems": [ 00:05:01.149 { 00:05:01.149 "subsystem": "keyring", 00:05:01.149 "config": [] 00:05:01.149 }, 00:05:01.149 { 00:05:01.149 "subsystem": "iobuf", 00:05:01.149 "config": [ 00:05:01.149 { 00:05:01.149 "method": "iobuf_set_options", 00:05:01.149 "params": { 00:05:01.149 "small_pool_count": 8192, 00:05:01.149 "large_pool_count": 1024, 00:05:01.149 "small_bufsize": 8192, 00:05:01.149 "large_bufsize": 135168 00:05:01.149 } 00:05:01.149 } 00:05:01.149 ] 00:05:01.149 }, 00:05:01.149 { 00:05:01.149 "subsystem": "sock", 00:05:01.149 "config": [ 00:05:01.149 { 00:05:01.149 "method": "sock_set_default_impl", 00:05:01.149 "params": { 00:05:01.149 "impl_name": "posix" 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "sock_impl_set_options", 00:05:01.150 "params": { 00:05:01.150 "impl_name": "ssl", 00:05:01.150 "recv_buf_size": 4096, 00:05:01.150 "send_buf_size": 4096, 
00:05:01.150 "enable_recv_pipe": true, 00:05:01.150 "enable_quickack": false, 00:05:01.150 "enable_placement_id": 0, 00:05:01.150 "enable_zerocopy_send_server": true, 00:05:01.150 "enable_zerocopy_send_client": false, 00:05:01.150 "zerocopy_threshold": 0, 00:05:01.150 "tls_version": 0, 00:05:01.150 "enable_ktls": false 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "sock_impl_set_options", 00:05:01.150 "params": { 00:05:01.150 "impl_name": "posix", 00:05:01.150 "recv_buf_size": 2097152, 00:05:01.150 "send_buf_size": 2097152, 00:05:01.150 "enable_recv_pipe": true, 00:05:01.150 "enable_quickack": false, 00:05:01.150 "enable_placement_id": 0, 00:05:01.150 "enable_zerocopy_send_server": true, 00:05:01.150 "enable_zerocopy_send_client": false, 00:05:01.150 "zerocopy_threshold": 0, 00:05:01.150 "tls_version": 0, 00:05:01.150 "enable_ktls": false 00:05:01.150 } 00:05:01.150 } 00:05:01.150 ] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "vmd", 00:05:01.150 "config": [] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "accel", 00:05:01.150 "config": [ 00:05:01.150 { 00:05:01.150 "method": "accel_set_options", 00:05:01.150 "params": { 00:05:01.150 "small_cache_size": 128, 00:05:01.150 "large_cache_size": 16, 00:05:01.150 "task_count": 2048, 00:05:01.150 "sequence_count": 2048, 00:05:01.150 "buf_count": 2048 00:05:01.150 } 00:05:01.150 } 00:05:01.150 ] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "bdev", 00:05:01.150 "config": [ 00:05:01.150 { 00:05:01.150 "method": "bdev_set_options", 00:05:01.150 "params": { 00:05:01.150 "bdev_io_pool_size": 65535, 00:05:01.150 "bdev_io_cache_size": 256, 00:05:01.150 "bdev_auto_examine": true, 00:05:01.150 "iobuf_small_cache_size": 128, 00:05:01.150 "iobuf_large_cache_size": 16 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "bdev_raid_set_options", 00:05:01.150 "params": { 00:05:01.150 "process_window_size_kb": 1024 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "bdev_iscsi_set_options", 00:05:01.150 "params": { 00:05:01.150 "timeout_sec": 30 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "bdev_nvme_set_options", 00:05:01.150 "params": { 00:05:01.150 "action_on_timeout": "none", 00:05:01.150 "timeout_us": 0, 00:05:01.150 "timeout_admin_us": 0, 00:05:01.150 "keep_alive_timeout_ms": 10000, 00:05:01.150 "arbitration_burst": 0, 00:05:01.150 "low_priority_weight": 0, 00:05:01.150 "medium_priority_weight": 0, 00:05:01.150 "high_priority_weight": 0, 00:05:01.150 "nvme_adminq_poll_period_us": 10000, 00:05:01.150 "nvme_ioq_poll_period_us": 0, 00:05:01.150 "io_queue_requests": 0, 00:05:01.150 "delay_cmd_submit": true, 00:05:01.150 "transport_retry_count": 4, 00:05:01.150 "bdev_retry_count": 3, 00:05:01.150 "transport_ack_timeout": 0, 00:05:01.150 "ctrlr_loss_timeout_sec": 0, 00:05:01.150 "reconnect_delay_sec": 0, 00:05:01.150 "fast_io_fail_timeout_sec": 0, 00:05:01.150 "disable_auto_failback": false, 00:05:01.150 "generate_uuids": false, 00:05:01.150 "transport_tos": 0, 00:05:01.150 "nvme_error_stat": false, 00:05:01.150 "rdma_srq_size": 0, 00:05:01.150 "io_path_stat": false, 00:05:01.150 "allow_accel_sequence": false, 00:05:01.150 "rdma_max_cq_size": 0, 00:05:01.150 "rdma_cm_event_timeout_ms": 0, 00:05:01.150 "dhchap_digests": [ 00:05:01.150 "sha256", 00:05:01.150 "sha384", 00:05:01.150 "sha512" 00:05:01.150 ], 00:05:01.150 "dhchap_dhgroups": [ 00:05:01.150 "null", 00:05:01.150 "ffdhe2048", 00:05:01.150 "ffdhe3072", 00:05:01.150 "ffdhe4096", 00:05:01.150 
"ffdhe6144", 00:05:01.150 "ffdhe8192" 00:05:01.150 ] 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "bdev_nvme_set_hotplug", 00:05:01.150 "params": { 00:05:01.150 "period_us": 100000, 00:05:01.150 "enable": false 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "bdev_wait_for_examine" 00:05:01.150 } 00:05:01.150 ] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "scsi", 00:05:01.150 "config": null 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "scheduler", 00:05:01.150 "config": [ 00:05:01.150 { 00:05:01.150 "method": "framework_set_scheduler", 00:05:01.150 "params": { 00:05:01.150 "name": "static" 00:05:01.150 } 00:05:01.150 } 00:05:01.150 ] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "vhost_scsi", 00:05:01.150 "config": [] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "vhost_blk", 00:05:01.150 "config": [] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "ublk", 00:05:01.150 "config": [] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "nbd", 00:05:01.150 "config": [] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "nvmf", 00:05:01.150 "config": [ 00:05:01.150 { 00:05:01.150 "method": "nvmf_set_config", 00:05:01.150 "params": { 00:05:01.150 "discovery_filter": "match_any", 00:05:01.150 "admin_cmd_passthru": { 00:05:01.150 "identify_ctrlr": false 00:05:01.150 } 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "nvmf_set_max_subsystems", 00:05:01.150 "params": { 00:05:01.150 "max_subsystems": 1024 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "nvmf_set_crdt", 00:05:01.150 "params": { 00:05:01.150 "crdt1": 0, 00:05:01.150 "crdt2": 0, 00:05:01.150 "crdt3": 0 00:05:01.150 } 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "method": "nvmf_create_transport", 00:05:01.150 "params": { 00:05:01.150 "trtype": "TCP", 00:05:01.150 "max_queue_depth": 128, 00:05:01.150 "max_io_qpairs_per_ctrlr": 127, 00:05:01.150 "in_capsule_data_size": 4096, 00:05:01.150 "max_io_size": 131072, 00:05:01.150 "io_unit_size": 131072, 00:05:01.150 "max_aq_depth": 128, 00:05:01.150 "num_shared_buffers": 511, 00:05:01.150 "buf_cache_size": 4294967295, 00:05:01.150 "dif_insert_or_strip": false, 00:05:01.150 "zcopy": false, 00:05:01.150 "c2h_success": true, 00:05:01.150 "sock_priority": 0, 00:05:01.150 "abort_timeout_sec": 1, 00:05:01.150 "ack_timeout": 0, 00:05:01.150 "data_wr_pool_size": 0 00:05:01.150 } 00:05:01.150 } 00:05:01.150 ] 00:05:01.150 }, 00:05:01.150 { 00:05:01.150 "subsystem": "iscsi", 00:05:01.150 "config": [ 00:05:01.150 { 00:05:01.150 "method": "iscsi_set_options", 00:05:01.150 "params": { 00:05:01.150 "node_base": "iqn.2016-06.io.spdk", 00:05:01.150 "max_sessions": 128, 00:05:01.150 "max_connections_per_session": 2, 00:05:01.150 "max_queue_depth": 64, 00:05:01.150 "default_time2wait": 2, 00:05:01.150 "default_time2retain": 20, 00:05:01.150 "first_burst_length": 8192, 00:05:01.150 "immediate_data": true, 00:05:01.151 "allow_duplicated_isid": false, 00:05:01.151 "error_recovery_level": 0, 00:05:01.151 "nop_timeout": 60, 00:05:01.151 "nop_in_interval": 30, 00:05:01.151 "disable_chap": false, 00:05:01.151 "require_chap": false, 00:05:01.151 "mutual_chap": false, 00:05:01.151 "chap_group": 0, 00:05:01.151 "max_large_datain_per_connection": 64, 00:05:01.151 "max_r2t_per_connection": 4, 00:05:01.151 "pdu_pool_size": 36864, 00:05:01.151 "immediate_data_pool_size": 16384, 00:05:01.151 "data_out_pool_size": 2048 00:05:01.151 } 00:05:01.151 } 00:05:01.151 ] 00:05:01.151 } 
00:05:01.151 ] 00:05:01.151 } 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62312 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62312 ']' 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62312 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62312 00:05:01.151 killing process with pid 62312 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62312' 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62312 00:05:01.151 15:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62312 00:05:03.054 15:13:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62357 00:05:03.054 15:13:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.054 15:13:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62357 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 62357 ']' 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 62357 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62357 00:05:08.327 killing process with pid 62357 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62357' 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 62357 00:05:08.327 15:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 62357 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:09.705 00:05:09.705 real 0m9.683s 00:05:09.705 user 0m9.343s 00:05:09.705 sys 0m0.680s 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.705 ************************************ 00:05:09.705 END TEST skip_rpc_with_json 00:05:09.705 
************************************ 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.705 15:13:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.705 15:13:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:09.705 15:13:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.705 15:13:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.705 15:13:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.705 ************************************ 00:05:09.705 START TEST skip_rpc_with_delay 00:05:09.705 ************************************ 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:09.705 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.705 [2024-07-11 15:13:23.297918] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
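This failure is the whole point of test_skip_rpc_with_delay: --wait-for-rpc tells the target to pause initialization until an RPC releases it, which is contradictory when --no-rpc-server suppresses the RPC server, so startup must be refused; the es=1 check just below confirms the non-zero exit. As a one-line sketch:

  # contradictory flags: the app must refuse to start and exit non-zero
  NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc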
00:05:09.705 [2024-07-11 15:13:23.298162] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:09.964 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:09.964 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.964 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.964 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.964 00:05:09.964 real 0m0.177s 00:05:09.964 user 0m0.099s 00:05:09.964 sys 0m0.076s 00:05:09.964 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.964 ************************************ 00:05:09.964 END TEST skip_rpc_with_delay 00:05:09.964 ************************************ 00:05:09.964 15:13:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.964 15:13:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:09.964 15:13:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.964 15:13:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.964 15:13:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.964 15:13:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.964 15:13:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.964 15:13:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.964 ************************************ 00:05:09.964 START TEST exit_on_failed_rpc_init 00:05:09.964 ************************************ 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62485 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62485 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 62485 ']' 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.964 15:13:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.964 [2024-07-11 15:13:23.536990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:09.964 [2024-07-11 15:13:23.537193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62485 ] 00:05:10.223 [2024-07-11 15:13:23.705982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.481 [2024-07-11 15:13:23.855603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:11.049 15:13:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:11.049 [2024-07-11 15:13:24.573320] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:11.050 [2024-07-11 15:13:24.573505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62503 ] 00:05:11.308 [2024-07-11 15:13:24.750578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.567 [2024-07-11 15:13:24.973820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.567 [2024-07-11 15:13:24.973916] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
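test_exit_on_failed_rpc_init provokes this collision deliberately: the second target runs on a different core mask (-m 0x2) but tries to bind the same default RPC socket that PID 62485 already owns, and the test asserts that it gives up cleanly; the trace that follows shows spdk_app_stop exiting with a non-zero status. Sketched with the helpers from above:

  build/bin/spdk_tgt -m 0x1 &        # first instance owns /var/tmp/spdk.sock
  spdk_pid=$!
  waitforlisten "$spdk_pid"
  # second instance cannot bind the socket, so RPC init fails and it must exit non-zero
  NOT build/bin/spdk_tgt -m 0x2
  killprocess "$spdk_pid"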
00:05:11.567 [2024-07-11 15:13:24.973938] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:11.567 [2024-07-11 15:13:24.973953] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62485 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 62485 ']' 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 62485 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62485 00:05:11.841 killing process with pid 62485 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62485' 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 62485 00:05:11.841 15:13:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 62485 00:05:13.779 00:05:13.779 real 0m3.706s 00:05:13.779 user 0m4.344s 00:05:13.779 sys 0m0.500s 00:05:13.779 15:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.779 15:13:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.779 ************************************ 00:05:13.779 END TEST exit_on_failed_rpc_init 00:05:13.779 ************************************ 00:05:13.779 15:13:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:13.779 15:13:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.779 ************************************ 00:05:13.779 END TEST skip_rpc 00:05:13.779 ************************************ 00:05:13.779 00:05:13.779 real 0m20.618s 00:05:13.779 user 0m20.209s 00:05:13.779 sys 0m1.751s 00:05:13.779 15:13:27 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.779 15:13:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.779 15:13:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.779 15:13:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:13.779 15:13:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.779 
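The '[' 2 -le 1 ']' check above is run_test's argument guard: every suite in this log is driven through the same wrapper, which wants a test name plus at least one command word, prints the START banner, times the command (those are the real/user/sys lines throughout), and closes with the END banner. A simplified sketch of that wrapper:

  run_test() {
    local name=$1; shift
    [ $# -ge 1 ] || return 1    # need a command after the test name
    echo "START TEST $name"
    time "$@"                   # run the suite; real/user/sys come from this
    echo "END TEST $name"
  }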
15:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.779 15:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:13.779 ************************************ 00:05:13.779 START TEST rpc_client 00:05:13.779 ************************************ 00:05:13.779 15:13:27 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:13.779 * Looking for test storage... 00:05:13.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:13.779 15:13:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:13.779 OK 00:05:13.779 15:13:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:13.779 00:05:13.779 real 0m0.148s 00:05:13.779 user 0m0.062s 00:05:13.779 sys 0m0.092s 00:05:13.779 15:13:27 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.779 ************************************ 00:05:13.779 END TEST rpc_client 00:05:13.779 ************************************ 00:05:13.779 15:13:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:14.039 15:13:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.039 15:13:27 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.039 15:13:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.039 15:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.039 15:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.039 ************************************ 00:05:14.039 START TEST json_config 00:05:14.039 ************************************ 00:05:14.039 15:13:27 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:14.039 15:13:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e52e2e0-5ec1-4d08-b2ca-1e4c6bc2e59a 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2e52e2e0-5ec1-4d08-b2ca-1e4c6bc2e59a 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.039 15:13:27 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.039 15:13:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.039 15:13:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.039 15:13:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.039 15:13:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.039 15:13:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.039 15:13:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.039 15:13:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:14.039 15:13:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@47 -- # : 0 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.039 15:13:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.039 15:13:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.039 15:13:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.039 15:13:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.039 15:13:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.039 15:13:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.040 15:13:27 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:14.040 WARNING: No tests are enabled so not running JSON configuration tests 00:05:14.040 15:13:27 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:14.040 00:05:14.040 real 0m0.084s 00:05:14.040 user 0m0.035s 00:05:14.040 sys 0m0.045s 00:05:14.040 15:13:27 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.040 15:13:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.040 ************************************ 00:05:14.040 END TEST json_config 00:05:14.040 ************************************ 00:05:14.040 15:13:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.040 15:13:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:14.040 15:13:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.040 15:13:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.040 15:13:27 -- common/autotest_common.sh@10 -- # set +x 00:05:14.040 ************************************ 00:05:14.040 START TEST json_config_extra_key 00:05:14.040 ************************************ 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e52e2e0-5ec1-4d08-b2ca-1e4c6bc2e59a 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2e52e2e0-5ec1-4d08-b2ca-1e4c6bc2e59a 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:14.040 15:13:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:05:14.040 15:13:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.040 15:13:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.040 15:13:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.040 15:13:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.040 15:13:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.040 15:13:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:14.040 15:13:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.040 15:13:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:14.040 INFO: launching applications... 00:05:14.040 15:13:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.040 Waiting for target to run... 00:05:14.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62678 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62678 /var/tmp/spdk_tgt.sock 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 62678 ']' 00:05:14.040 15:13:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.040 15:13:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:14.300 [2024-07-11 15:13:27.740367] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
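The trace above shows the json_config_extra_key pattern for bringing up a target: spdk_tgt is launched from a pre-built JSON config (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json) and the test then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that readiness poll, in the same spirit but not the actual autotest_common.sh implementation (the helper name wait_for_rpc_socket and the use of spdk_get_version as the probe are assumptions):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk_tgt.sock

    wait_for_rpc_socket() {
        # Probe the RPC server; it only answers once the app is listening.
        local i
        for ((i = 0; i < 30; i++)); do
            "$RPC" -s "$SOCK" spdk_get_version &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json extra_key.json &
    app_pid=$!
    wait_for_rpc_socket || { kill "$app_pid"; exit 1; }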
00:05:14.300 [2024-07-11 15:13:27.740506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62678 ] 00:05:14.559 [2024-07-11 15:13:28.046559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.819 [2024-07-11 15:13:28.196947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.386 00:05:15.386 INFO: shutting down applications... 00:05:15.386 15:13:28 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.386 15:13:28 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:15.386 15:13:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:15.386 15:13:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:15.387 15:13:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62678 ]] 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62678 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62678 00:05:15.387 15:13:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.645 15:13:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.646 15:13:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.646 15:13:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62678 00:05:15.646 15:13:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.213 15:13:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.213 15:13:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.213 15:13:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62678 00:05:16.213 15:13:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.781 15:13:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.781 15:13:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.781 15:13:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62678 00:05:16.781 15:13:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.349 15:13:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.349 15:13:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.349 15:13:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62678 00:05:17.349 15:13:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62678 
00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.930 SPDK target shutdown done 00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.930 15:13:31 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.930 Success 00:05:17.930 15:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.930 ************************************ 00:05:17.930 END TEST json_config_extra_key 00:05:17.930 ************************************ 00:05:17.930 00:05:17.930 real 0m3.702s 00:05:17.930 user 0m3.201s 00:05:17.930 sys 0m0.416s 00:05:17.930 15:13:31 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.930 15:13:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 15:13:31 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.930 15:13:31 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.930 15:13:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.930 15:13:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.930 15:13:31 -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 ************************************ 00:05:17.930 START TEST alias_rpc 00:05:17.930 ************************************ 00:05:17.930 15:13:31 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.930 * Looking for test storage... 00:05:17.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:17.930 15:13:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.930 15:13:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62775 00:05:17.931 15:13:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.931 15:13:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62775 00:05:17.931 15:13:31 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 62775 ']' 00:05:17.931 15:13:31 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.931 15:13:31 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.931 15:13:31 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.931 15:13:31 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.931 15:13:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.931 [2024-07-11 15:13:31.508314] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
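The shutdown traced above (json_config/common.sh@38-45) is deliberate: SIGINT asks the target to exit cleanly, and kill -0, which delivers no signal and only tests whether the pid still exists, is polled every 0.5 s for up to 30 iterations before the test gives up. A minimal sketch of the same pattern (the kill -9 fallback at the end is an assumption, not something this run reached):

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        kill -9 "$pid"   # assumed last resort if the target never exits
    }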
00:05:17.931 [2024-07-11 15:13:31.509386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62775 ] 00:05:18.189 [2024-07-11 15:13:31.684534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.447 [2024-07-11 15:13:31.840404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.014 15:13:32 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.014 15:13:32 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.014 15:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:19.272 15:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62775 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 62775 ']' 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 62775 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62775 00:05:19.272 killing process with pid 62775 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62775' 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@967 -- # kill 62775 00:05:19.272 15:13:32 alias_rpc -- common/autotest_common.sh@972 -- # wait 62775 00:05:21.173 ************************************ 00:05:21.173 END TEST alias_rpc 00:05:21.173 ************************************ 00:05:21.173 00:05:21.173 real 0m3.228s 00:05:21.174 user 0m3.468s 00:05:21.174 sys 0m0.433s 00:05:21.174 15:13:34 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.174 15:13:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.174 15:13:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:21.174 15:13:34 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:21.174 15:13:34 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.174 15:13:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.174 15:13:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.174 15:13:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.174 ************************************ 00:05:21.174 START TEST spdkcli_tcp 00:05:21.174 ************************************ 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.174 * Looking for test storage... 
00:05:21.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62863 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62863 00:05:21.174 15:13:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 62863 ']' 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.174 15:13:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.174 [2024-07-11 15:13:34.774634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
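spdkcli_tcp exercises the same RPC server over TCP rather than over the UNIX socket directly: a socat process (visible just below) listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP address with retries. A minimal sketch of that bridge, reusing the addresses and rpc.py flags from the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100: retry the connection up to 100 times; -t 2: 2 s timeout per try.
    "$RPC" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"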
00:05:21.174 [2024-07-11 15:13:34.774779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62863 ] 00:05:21.433 [2024-07-11 15:13:34.942143] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.691 [2024-07-11 15:13:35.103176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.691 [2024-07-11 15:13:35.103190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.258 15:13:35 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.258 15:13:35 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:22.258 15:13:35 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62880 00:05:22.258 15:13:35 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.258 15:13:35 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.517 [ 00:05:22.517 "bdev_malloc_delete", 00:05:22.517 "bdev_malloc_create", 00:05:22.517 "bdev_null_resize", 00:05:22.517 "bdev_null_delete", 00:05:22.517 "bdev_null_create", 00:05:22.517 "bdev_nvme_cuse_unregister", 00:05:22.517 "bdev_nvme_cuse_register", 00:05:22.517 "bdev_opal_new_user", 00:05:22.517 "bdev_opal_set_lock_state", 00:05:22.517 "bdev_opal_delete", 00:05:22.517 "bdev_opal_get_info", 00:05:22.517 "bdev_opal_create", 00:05:22.517 "bdev_nvme_opal_revert", 00:05:22.517 "bdev_nvme_opal_init", 00:05:22.517 "bdev_nvme_send_cmd", 00:05:22.517 "bdev_nvme_get_path_iostat", 00:05:22.517 "bdev_nvme_get_mdns_discovery_info", 00:05:22.517 "bdev_nvme_stop_mdns_discovery", 00:05:22.517 "bdev_nvme_start_mdns_discovery", 00:05:22.517 "bdev_nvme_set_multipath_policy", 00:05:22.517 "bdev_nvme_set_preferred_path", 00:05:22.517 "bdev_nvme_get_io_paths", 00:05:22.517 "bdev_nvme_remove_error_injection", 00:05:22.517 "bdev_nvme_add_error_injection", 00:05:22.517 "bdev_nvme_get_discovery_info", 00:05:22.517 "bdev_nvme_stop_discovery", 00:05:22.517 "bdev_nvme_start_discovery", 00:05:22.517 "bdev_nvme_get_controller_health_info", 00:05:22.517 "bdev_nvme_disable_controller", 00:05:22.517 "bdev_nvme_enable_controller", 00:05:22.517 "bdev_nvme_reset_controller", 00:05:22.517 "bdev_nvme_get_transport_statistics", 00:05:22.517 "bdev_nvme_apply_firmware", 00:05:22.517 "bdev_nvme_detach_controller", 00:05:22.517 "bdev_nvme_get_controllers", 00:05:22.517 "bdev_nvme_attach_controller", 00:05:22.517 "bdev_nvme_set_hotplug", 00:05:22.517 "bdev_nvme_set_options", 00:05:22.517 "bdev_passthru_delete", 00:05:22.517 "bdev_passthru_create", 00:05:22.517 "bdev_lvol_set_parent_bdev", 00:05:22.517 "bdev_lvol_set_parent", 00:05:22.517 "bdev_lvol_check_shallow_copy", 00:05:22.517 "bdev_lvol_start_shallow_copy", 00:05:22.517 "bdev_lvol_grow_lvstore", 00:05:22.517 "bdev_lvol_get_lvols", 00:05:22.517 "bdev_lvol_get_lvstores", 00:05:22.517 "bdev_lvol_delete", 00:05:22.517 "bdev_lvol_set_read_only", 00:05:22.517 "bdev_lvol_resize", 00:05:22.517 "bdev_lvol_decouple_parent", 00:05:22.517 "bdev_lvol_inflate", 00:05:22.517 "bdev_lvol_rename", 00:05:22.517 "bdev_lvol_clone_bdev", 00:05:22.517 "bdev_lvol_clone", 00:05:22.517 "bdev_lvol_snapshot", 00:05:22.517 "bdev_lvol_create", 00:05:22.517 "bdev_lvol_delete_lvstore", 00:05:22.517 "bdev_lvol_rename_lvstore", 00:05:22.517 "bdev_lvol_create_lvstore", 
00:05:22.517 "bdev_raid_set_options", 00:05:22.517 "bdev_raid_remove_base_bdev", 00:05:22.517 "bdev_raid_add_base_bdev", 00:05:22.517 "bdev_raid_delete", 00:05:22.517 "bdev_raid_create", 00:05:22.517 "bdev_raid_get_bdevs", 00:05:22.517 "bdev_error_inject_error", 00:05:22.517 "bdev_error_delete", 00:05:22.517 "bdev_error_create", 00:05:22.517 "bdev_split_delete", 00:05:22.517 "bdev_split_create", 00:05:22.517 "bdev_delay_delete", 00:05:22.517 "bdev_delay_create", 00:05:22.517 "bdev_delay_update_latency", 00:05:22.517 "bdev_zone_block_delete", 00:05:22.517 "bdev_zone_block_create", 00:05:22.517 "blobfs_create", 00:05:22.517 "blobfs_detect", 00:05:22.517 "blobfs_set_cache_size", 00:05:22.517 "bdev_xnvme_delete", 00:05:22.518 "bdev_xnvme_create", 00:05:22.518 "bdev_aio_delete", 00:05:22.518 "bdev_aio_rescan", 00:05:22.518 "bdev_aio_create", 00:05:22.518 "bdev_ftl_set_property", 00:05:22.518 "bdev_ftl_get_properties", 00:05:22.518 "bdev_ftl_get_stats", 00:05:22.518 "bdev_ftl_unmap", 00:05:22.518 "bdev_ftl_unload", 00:05:22.518 "bdev_ftl_delete", 00:05:22.518 "bdev_ftl_load", 00:05:22.518 "bdev_ftl_create", 00:05:22.518 "bdev_virtio_attach_controller", 00:05:22.518 "bdev_virtio_scsi_get_devices", 00:05:22.518 "bdev_virtio_detach_controller", 00:05:22.518 "bdev_virtio_blk_set_hotplug", 00:05:22.518 "bdev_iscsi_delete", 00:05:22.518 "bdev_iscsi_create", 00:05:22.518 "bdev_iscsi_set_options", 00:05:22.518 "accel_error_inject_error", 00:05:22.518 "ioat_scan_accel_module", 00:05:22.518 "dsa_scan_accel_module", 00:05:22.518 "iaa_scan_accel_module", 00:05:22.518 "keyring_file_remove_key", 00:05:22.518 "keyring_file_add_key", 00:05:22.518 "keyring_linux_set_options", 00:05:22.518 "iscsi_get_histogram", 00:05:22.518 "iscsi_enable_histogram", 00:05:22.518 "iscsi_set_options", 00:05:22.518 "iscsi_get_auth_groups", 00:05:22.518 "iscsi_auth_group_remove_secret", 00:05:22.518 "iscsi_auth_group_add_secret", 00:05:22.518 "iscsi_delete_auth_group", 00:05:22.518 "iscsi_create_auth_group", 00:05:22.518 "iscsi_set_discovery_auth", 00:05:22.518 "iscsi_get_options", 00:05:22.518 "iscsi_target_node_request_logout", 00:05:22.518 "iscsi_target_node_set_redirect", 00:05:22.518 "iscsi_target_node_set_auth", 00:05:22.518 "iscsi_target_node_add_lun", 00:05:22.518 "iscsi_get_stats", 00:05:22.518 "iscsi_get_connections", 00:05:22.518 "iscsi_portal_group_set_auth", 00:05:22.518 "iscsi_start_portal_group", 00:05:22.518 "iscsi_delete_portal_group", 00:05:22.518 "iscsi_create_portal_group", 00:05:22.518 "iscsi_get_portal_groups", 00:05:22.518 "iscsi_delete_target_node", 00:05:22.518 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.518 "iscsi_target_node_add_pg_ig_maps", 00:05:22.518 "iscsi_create_target_node", 00:05:22.518 "iscsi_get_target_nodes", 00:05:22.518 "iscsi_delete_initiator_group", 00:05:22.518 "iscsi_initiator_group_remove_initiators", 00:05:22.518 "iscsi_initiator_group_add_initiators", 00:05:22.518 "iscsi_create_initiator_group", 00:05:22.518 "iscsi_get_initiator_groups", 00:05:22.518 "nvmf_set_crdt", 00:05:22.518 "nvmf_set_config", 00:05:22.518 "nvmf_set_max_subsystems", 00:05:22.518 "nvmf_stop_mdns_prr", 00:05:22.518 "nvmf_publish_mdns_prr", 00:05:22.518 "nvmf_subsystem_get_listeners", 00:05:22.518 "nvmf_subsystem_get_qpairs", 00:05:22.518 "nvmf_subsystem_get_controllers", 00:05:22.518 "nvmf_get_stats", 00:05:22.518 "nvmf_get_transports", 00:05:22.518 "nvmf_create_transport", 00:05:22.518 "nvmf_get_targets", 00:05:22.518 "nvmf_delete_target", 00:05:22.518 "nvmf_create_target", 00:05:22.518 
"nvmf_subsystem_allow_any_host", 00:05:22.518 "nvmf_subsystem_remove_host", 00:05:22.518 "nvmf_subsystem_add_host", 00:05:22.518 "nvmf_ns_remove_host", 00:05:22.518 "nvmf_ns_add_host", 00:05:22.518 "nvmf_subsystem_remove_ns", 00:05:22.518 "nvmf_subsystem_add_ns", 00:05:22.518 "nvmf_subsystem_listener_set_ana_state", 00:05:22.518 "nvmf_discovery_get_referrals", 00:05:22.518 "nvmf_discovery_remove_referral", 00:05:22.518 "nvmf_discovery_add_referral", 00:05:22.518 "nvmf_subsystem_remove_listener", 00:05:22.518 "nvmf_subsystem_add_listener", 00:05:22.518 "nvmf_delete_subsystem", 00:05:22.518 "nvmf_create_subsystem", 00:05:22.518 "nvmf_get_subsystems", 00:05:22.518 "env_dpdk_get_mem_stats", 00:05:22.518 "nbd_get_disks", 00:05:22.518 "nbd_stop_disk", 00:05:22.518 "nbd_start_disk", 00:05:22.518 "ublk_recover_disk", 00:05:22.518 "ublk_get_disks", 00:05:22.518 "ublk_stop_disk", 00:05:22.518 "ublk_start_disk", 00:05:22.518 "ublk_destroy_target", 00:05:22.518 "ublk_create_target", 00:05:22.518 "virtio_blk_create_transport", 00:05:22.518 "virtio_blk_get_transports", 00:05:22.518 "vhost_controller_set_coalescing", 00:05:22.518 "vhost_get_controllers", 00:05:22.518 "vhost_delete_controller", 00:05:22.518 "vhost_create_blk_controller", 00:05:22.518 "vhost_scsi_controller_remove_target", 00:05:22.518 "vhost_scsi_controller_add_target", 00:05:22.518 "vhost_start_scsi_controller", 00:05:22.518 "vhost_create_scsi_controller", 00:05:22.518 "thread_set_cpumask", 00:05:22.518 "framework_get_governor", 00:05:22.518 "framework_get_scheduler", 00:05:22.518 "framework_set_scheduler", 00:05:22.518 "framework_get_reactors", 00:05:22.518 "thread_get_io_channels", 00:05:22.518 "thread_get_pollers", 00:05:22.518 "thread_get_stats", 00:05:22.518 "framework_monitor_context_switch", 00:05:22.518 "spdk_kill_instance", 00:05:22.518 "log_enable_timestamps", 00:05:22.518 "log_get_flags", 00:05:22.518 "log_clear_flag", 00:05:22.518 "log_set_flag", 00:05:22.518 "log_get_level", 00:05:22.518 "log_set_level", 00:05:22.518 "log_get_print_level", 00:05:22.518 "log_set_print_level", 00:05:22.518 "framework_enable_cpumask_locks", 00:05:22.518 "framework_disable_cpumask_locks", 00:05:22.518 "framework_wait_init", 00:05:22.518 "framework_start_init", 00:05:22.518 "scsi_get_devices", 00:05:22.518 "bdev_get_histogram", 00:05:22.518 "bdev_enable_histogram", 00:05:22.518 "bdev_set_qos_limit", 00:05:22.518 "bdev_set_qd_sampling_period", 00:05:22.518 "bdev_get_bdevs", 00:05:22.518 "bdev_reset_iostat", 00:05:22.518 "bdev_get_iostat", 00:05:22.518 "bdev_examine", 00:05:22.518 "bdev_wait_for_examine", 00:05:22.518 "bdev_set_options", 00:05:22.518 "notify_get_notifications", 00:05:22.518 "notify_get_types", 00:05:22.518 "accel_get_stats", 00:05:22.518 "accel_set_options", 00:05:22.518 "accel_set_driver", 00:05:22.518 "accel_crypto_key_destroy", 00:05:22.518 "accel_crypto_keys_get", 00:05:22.518 "accel_crypto_key_create", 00:05:22.518 "accel_assign_opc", 00:05:22.518 "accel_get_module_info", 00:05:22.518 "accel_get_opc_assignments", 00:05:22.518 "vmd_rescan", 00:05:22.518 "vmd_remove_device", 00:05:22.518 "vmd_enable", 00:05:22.518 "sock_get_default_impl", 00:05:22.518 "sock_set_default_impl", 00:05:22.518 "sock_impl_set_options", 00:05:22.518 "sock_impl_get_options", 00:05:22.518 "iobuf_get_stats", 00:05:22.518 "iobuf_set_options", 00:05:22.518 "framework_get_pci_devices", 00:05:22.518 "framework_get_config", 00:05:22.518 "framework_get_subsystems", 00:05:22.518 "trace_get_info", 00:05:22.518 "trace_get_tpoint_group_mask", 00:05:22.518 
"trace_disable_tpoint_group", 00:05:22.518 "trace_enable_tpoint_group", 00:05:22.518 "trace_clear_tpoint_mask", 00:05:22.518 "trace_set_tpoint_mask", 00:05:22.518 "keyring_get_keys", 00:05:22.518 "spdk_get_version", 00:05:22.518 "rpc_get_methods" 00:05:22.518 ] 00:05:22.518 15:13:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.518 15:13:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.518 15:13:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62863 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 62863 ']' 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 62863 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62863 00:05:22.518 killing process with pid 62863 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62863' 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 62863 00:05:22.518 15:13:36 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 62863 00:05:24.421 00:05:24.421 real 0m3.307s 00:05:24.421 user 0m5.984s 00:05:24.421 sys 0m0.477s 00:05:24.421 15:13:37 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.421 ************************************ 00:05:24.421 END TEST spdkcli_tcp 00:05:24.421 ************************************ 00:05:24.421 15:13:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.421 15:13:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.421 15:13:37 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.421 15:13:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.421 15:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.421 15:13:37 -- common/autotest_common.sh@10 -- # set +x 00:05:24.421 ************************************ 00:05:24.421 START TEST dpdk_mem_utility 00:05:24.421 ************************************ 00:05:24.421 15:13:37 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.421 * Looking for test storage... 
00:05:24.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:24.680 15:13:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:24.680 15:13:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62966 00:05:24.680 15:13:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62966 00:05:24.680 15:13:38 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 62966 ']' 00:05:24.680 15:13:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.680 15:13:38 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.680 15:13:38 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.681 15:13:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.681 15:13:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.681 15:13:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:24.681 [2024-07-11 15:13:38.165801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:24.681 [2024-07-11 15:13:38.165978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62966 ] 00:05:24.940 [2024-07-11 15:13:38.337246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.940 [2024-07-11 15:13:38.490907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.548 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.548 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:25.548 15:13:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.548 15:13:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.548 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:25.548 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.548 { 00:05:25.548 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.548 } 00:05:25.548 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:25.548 15:13:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.548 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:25.548 1 heaps totaling size 820.000000 MiB 00:05:25.548 size: 820.000000 MiB heap id: 0 00:05:25.548 end heaps---------- 00:05:25.548 8 mempools totaling size 598.116089 MiB 00:05:25.548 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.548 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.548 size: 84.521057 MiB name: bdev_io_62966 00:05:25.548 size: 51.011292 MiB name: evtpool_62966 00:05:25.548 size: 50.003479 MiB name: msgpool_62966 00:05:25.548 size: 21.763794 MiB name: PDU_Pool 00:05:25.548 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:05:25.548 size: 0.026123 MiB name: Session_Pool 00:05:25.548 end mempools------- 00:05:25.548 6 memzones totaling size 4.142822 MiB 00:05:25.548 size: 1.000366 MiB name: RG_ring_0_62966 00:05:25.548 size: 1.000366 MiB name: RG_ring_1_62966 00:05:25.548 size: 1.000366 MiB name: RG_ring_4_62966 00:05:25.548 size: 1.000366 MiB name: RG_ring_5_62966 00:05:25.548 size: 0.125366 MiB name: RG_ring_2_62966 00:05:25.548 size: 0.015991 MiB name: RG_ring_3_62966 00:05:25.548 end memzones------- 00:05:25.548 15:13:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.809 heap id: 0 total size: 820.000000 MiB number of busy elements: 301 number of free elements: 18 00:05:25.809 list of free elements. size: 18.451294 MiB 00:05:25.809 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:25.809 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:25.809 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:25.809 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:25.809 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:25.809 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:25.809 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:25.809 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:25.809 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:25.809 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:25.809 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:25.809 element at address: 0x200000200000 with size: 0.829956 MiB 00:05:25.809 element at address: 0x20001b000000 with size: 0.563904 MiB 00:05:25.809 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:25.809 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:25.809 element at address: 0x200013800000 with size: 0.467896 MiB 00:05:25.809 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:25.809 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:25.809 list of standard malloc elements. 
size: 199.284302 MiB 00:05:25.809 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:25.809 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:25.809 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:25.809 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:25.809 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:25.809 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:25.809 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:25.809 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:25.809 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:25.809 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:25.809 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:25.809 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:05:25.809 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:25.809 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:25.809 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:25.810 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b090dc0 
with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:25.810 element at address: 0x20001b090fc0 with size: 0.000244 MiB [~140 further identical 0.000244 MiB heap elements elided: 0x20001b0910c0 through 0x20001b0953c0, 0x200028463f40, 0x200028464040, and 0x20002846ad00 through 0x20002846f880] 00:05:25.811 element at address: 0x20002846f980
with size: 0.000244 MiB 00:05:25.811 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:25.811 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:25.811 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:25.811 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:25.811 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:25.811 list of memzone associated elements. size: 602.264404 MiB 00:05:25.811 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:25.811 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.811 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:25.811 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.811 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:25.811 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62966_0 00:05:25.811 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:25.811 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62966_0 00:05:25.811 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:25.811 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62966_0 00:05:25.811 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:25.811 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.811 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:25.811 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.811 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:25.811 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62966 00:05:25.811 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:25.811 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62966 00:05:25.811 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:25.811 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62966 00:05:25.811 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:25.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.812 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:25.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.812 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:25.812 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.812 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:25.812 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.812 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:25.812 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62966 00:05:25.812 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:25.812 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62966 00:05:25.812 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:25.812 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62966 00:05:25.812 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:25.812 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62966 00:05:25.812 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:25.812 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62966 00:05:25.812 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:25.812 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 
00:05:25.812 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:25.812 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.812 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:25.812 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.812 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:25.812 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62966 00:05:25.812 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:25.812 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.812 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:25.812 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.812 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:25.812 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62966 00:05:25.812 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:25.812 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.812 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:25.812 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62966 00:05:25.812 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:25.812 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62966 00:05:25.812 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:25.812 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.812 15:13:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.812 15:13:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62966 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 62966 ']' 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 62966 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62966 00:05:25.812 killing process with pid 62966 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62966' 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 62966 00:05:25.812 15:13:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 62966 00:05:27.718 ************************************ 00:05:27.718 END TEST dpdk_mem_utility 00:05:27.718 ************************************ 00:05:27.718 00:05:27.718 real 0m3.124s 00:05:27.718 user 0m3.187s 00:05:27.718 sys 0m0.436s 00:05:27.718 15:13:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.718 15:13:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.718 15:13:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.718 15:13:41 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:27.718 15:13:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.718 15:13:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.718 15:13:41 
-- common/autotest_common.sh@10 -- # set +x 00:05:27.718 ************************************ 00:05:27.718 START TEST event 00:05:27.718 ************************************ 00:05:27.718 15:13:41 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:27.718 * Looking for test storage... 00:05:27.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:27.718 15:13:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:27.718 15:13:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:27.718 15:13:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.718 15:13:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:27.719 15:13:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.719 15:13:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.719 ************************************ 00:05:27.719 START TEST event_perf 00:05:27.719 ************************************ 00:05:27.719 15:13:41 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.719 Running I/O for 1 seconds...[2024-07-11 15:13:41.274400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:27.719 [2024-07-11 15:13:41.274586] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63055 ] 00:05:27.978 [2024-07-11 15:13:41.445391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.237 [2024-07-11 15:13:41.621629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.237 [2024-07-11 15:13:41.621709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.237 [2024-07-11 15:13:41.621860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.237 [2024-07-11 15:13:41.621873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.613 Running I/O for 1 seconds... 00:05:29.613 lcore 0: 201022 00:05:29.613 lcore 1: 201022 00:05:29.613 lcore 2: 201020 00:05:29.613 lcore 3: 201021 00:05:29.613 done. 
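The four "lcore N" counts above are event_perf's per-reactor summary: with -m 0xF one reactor is pinned to each of cores 0-3, and each reports how many events it processed during the -t 1 second window (~201k apiece here). A minimal manual re-run, assuming the same in-VM checkout path the harness uses and root access for hugepages:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # -m 0xF: core mask selecting cores 0-3, one reactor per core
    # -t 1:   run the event-processing loop for one second, then print per-lcore totals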
00:05:29.613 00:05:29.613 real 0m1.732s 00:05:29.613 user 0m4.504s 00:05:29.613 sys 0m0.105s 00:05:29.613 15:13:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.613 15:13:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.613 ************************************ 00:05:29.613 END TEST event_perf 00:05:29.613 ************************************ 00:05:29.613 15:13:43 event -- common/autotest_common.sh@1142 -- # return 0 00:05:29.613 15:13:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:29.613 15:13:43 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:29.613 15:13:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.613 15:13:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.613 ************************************ 00:05:29.613 START TEST event_reactor 00:05:29.613 ************************************ 00:05:29.613 15:13:43 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:29.613 [2024-07-11 15:13:43.051649] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:29.613 [2024-07-11 15:13:43.051788] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63100 ] 00:05:29.613 [2024-07-11 15:13:43.206417] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.871 [2024-07-11 15:13:43.357929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.247 test_start 00:05:31.247 oneshot 00:05:31.247 tick 100 00:05:31.247 tick 100 00:05:31.247 tick 250 00:05:31.247 tick 100 00:05:31.247 tick 100 00:05:31.247 tick 100 00:05:31.247 tick 250 00:05:31.247 tick 500 00:05:31.247 tick 100 00:05:31.247 tick 100 00:05:31.247 tick 250 00:05:31.247 tick 100 00:05:31.247 tick 100 00:05:31.247 test_end 00:05:31.247 ************************************ 00:05:31.247 END TEST event_reactor 00:05:31.247 ************************************ 00:05:31.247 00:05:31.247 real 0m1.678s 00:05:31.247 user 0m1.476s 00:05:31.247 sys 0m0.093s 00:05:31.247 15:13:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.247 15:13:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 15:13:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:31.247 15:13:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.247 15:13:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:31.247 15:13:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.247 15:13:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.247 ************************************ 00:05:31.247 START TEST event_reactor_perf 00:05:31.247 ************************************ 00:05:31.247 15:13:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.247 [2024-07-11 15:13:44.788271] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:31.247 [2024-07-11 15:13:44.788430] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63137 ] 00:05:31.506 [2024-07-11 15:13:44.958047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.506 [2024-07-11 15:13:45.119911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.880 test_start 00:05:32.880 test_end 00:05:32.880 Performance: 332849 events per second 00:05:32.880 ************************************ 00:05:32.880 END TEST event_reactor_perf 00:05:32.880 ************************************ 00:05:32.880 00:05:32.880 real 0m1.698s 00:05:32.880 user 0m1.488s 00:05:32.880 sys 0m0.100s 00:05:32.880 15:13:46 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.880 15:13:46 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.880 15:13:46 event -- common/autotest_common.sh@1142 -- # return 0 00:05:32.880 15:13:46 event -- event/event.sh@49 -- # uname -s 00:05:32.880 15:13:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.880 15:13:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:32.880 15:13:46 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.880 15:13:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.880 15:13:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.139 ************************************ 00:05:33.139 START TEST event_scheduler 00:05:33.139 ************************************ 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:33.139 * Looking for test storage... 00:05:33.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:33.139 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.139 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63199 00:05:33.139 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.139 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.139 15:13:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63199 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 63199 ']' 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.139 15:13:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.139 [2024-07-11 15:13:46.684320] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:33.139 [2024-07-11 15:13:46.684750] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63199 ] 00:05:33.398 [2024-07-11 15:13:46.859645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.657 [2024-07-11 15:13:47.086125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.657 [2024-07-11 15:13:47.086281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.658 [2024-07-11 15:13:47.086404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.658 [2024-07-11 15:13:47.086422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:34.226 15:13:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.226 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.226 POWER: Cannot set governor of lcore 0 to performance 00:05:34.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.226 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:34.226 POWER: Cannot set governor of lcore 0 to userspace 00:05:34.226 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:34.226 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:34.226 POWER: Unable to set Power Management Environment for lcore 0 00:05:34.226 [2024-07-11 15:13:47.609865] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:34.226 [2024-07-11 15:13:47.609986] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:34.226 [2024-07-11 15:13:47.610152] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:34.226 [2024-07-11 15:13:47.610282] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:34.226 [2024-07-11 15:13:47.610395] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:34.226 [2024-07-11 15:13:47.610564] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.226 15:13:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.226 15:13:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 [2024-07-11 15:13:47.844193] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
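The POWER errors just above are expected inside a VM: the cpufreq sysfs files are not exposed, so the DPDK governor fails to initialize and the dynamic scheduler carries on without frequency scaling, keeping the defaults it logs (load limit 20, core limit 80, core busy 95). Outside this test plugin the same switch is an ordinary RPC; a minimal sketch, assuming a live target listening on the default /var/tmp/spdk.sock:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_scheduler
    # the second call should now report the 'dynamic' scheduler as active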
00:05:34.486 15:13:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:34.486 15:13:47 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.486 15:13:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 ************************************ 00:05:34.486 START TEST scheduler_create_thread 00:05:34.486 ************************************ 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 2 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 3 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 4 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 5 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 6 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 7 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 8 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 9 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 10 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.486 15:13:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.424 15:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.424 15:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.424 15:13:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.424 15:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.424 15:13:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.803 ************************************ 00:05:36.803 END TEST scheduler_create_thread 00:05:36.803 ************************************ 00:05:36.803 15:13:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.803 00:05:36.803 real 0m2.137s 00:05:36.803 user 0m0.017s 00:05:36.803 sys 0m0.002s 00:05:36.803 15:13:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.803 15:13:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:36.803 15:13:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.803 15:13:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63199 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 63199 ']' 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 63199 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63199 00:05:36.803 killing process with pid 63199 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63199' 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 63199 00:05:36.803 15:13:50 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 63199 00:05:37.062 [2024-07-11 15:13:50.473214] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
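The teardown traced above is the harness's killprocess helper: probe the pid with kill -0, resolve the process name with ps (to log it and decide whether sudo handling applies), send the default SIGTERM, then wait so the process is reaped before the next test begins. A condensed sketch of that flow, not the verbatim helper:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone: nothing to do
        local name; name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"             # mirrors the log line above
        kill "$pid"                                      # default signal is SIGTERM
        wait "$pid"                                      # reap it (valid here: the app is a child of the test shell)
    }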
00:05:37.999 00:05:37.999 real 0m4.979s 00:05:37.999 user 0m8.248s 00:05:37.999 sys 0m0.389s 00:05:37.999 ************************************ 00:05:37.999 END TEST event_scheduler 00:05:37.999 ************************************ 00:05:37.999 15:13:51 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.999 15:13:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.999 15:13:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:37.999 15:13:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.999 15:13:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.999 15:13:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.999 15:13:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.999 15:13:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.999 ************************************ 00:05:37.999 START TEST app_repeat 00:05:37.999 ************************************ 00:05:37.999 15:13:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:37.999 15:13:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.999 15:13:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.999 15:13:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:37.999 15:13:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.999 15:13:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.000 Process app_repeat pid: 63305 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63305 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63305' 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.000 spdk_app_start Round 0 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.000 15:13:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63305 /var/tmp/spdk-nbd.sock 00:05:38.000 15:13:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63305 ']' 00:05:38.000 15:13:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.000 15:13:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.000 15:13:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.000 15:13:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.000 15:13:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.000 [2024-07-11 15:13:51.610747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:38.000 [2024-07-11 15:13:51.610929] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63305 ] 00:05:38.259 [2024-07-11 15:13:51.782542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.518 [2024-07-11 15:13:51.943566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.518 [2024-07-11 15:13:51.943581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.085 15:13:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.085 15:13:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:39.085 15:13:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.344 Malloc0 00:05:39.344 15:13:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.602 Malloc1 00:05:39.602 15:13:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.602 15:13:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.860 /dev/nbd0 00:05:40.118 15:13:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.118 15:13:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.118 15:13:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:40.118 15:13:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.118 15:13:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.118 15:13:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.118 15:13:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:40.118 15:13:53 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.119 1+0 records in 00:05:40.119 1+0 records out 00:05:40.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307251 s, 13.3 MB/s 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.119 15:13:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.119 15:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.119 15:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.119 15:13:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.377 /dev/nbd1 00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.377 1+0 records in 00:05:40.377 1+0 records out 00:05:40.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407478 s, 10.1 MB/s 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.377 15:13:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:40.377 15:13:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.636 { 00:05:40.636 "nbd_device": "/dev/nbd0", 00:05:40.636 "bdev_name": "Malloc0" 00:05:40.636 }, 00:05:40.636 { 00:05:40.636 "nbd_device": "/dev/nbd1", 00:05:40.636 "bdev_name": "Malloc1" 00:05:40.636 } 00:05:40.636 ]' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.636 { 00:05:40.636 "nbd_device": "/dev/nbd0", 00:05:40.636 "bdev_name": "Malloc0" 00:05:40.636 }, 00:05:40.636 { 00:05:40.636 "nbd_device": "/dev/nbd1", 00:05:40.636 "bdev_name": "Malloc1" 00:05:40.636 } 00:05:40.636 ]' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.636 /dev/nbd1' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.636 /dev/nbd1' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.636 256+0 records in 00:05:40.636 256+0 records out 00:05:40.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105988 s, 98.9 MB/s 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.636 256+0 records in 00:05:40.636 256+0 records out 00:05:40.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271965 s, 38.6 MB/s 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.636 256+0 records in 00:05:40.636 256+0 records out 00:05:40.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313014 s, 33.5 MB/s 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.636 15:13:54 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.636 15:13:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.895 15:13:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.154 15:13:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.413 15:13:54 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.413 15:13:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.672 15:13:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.672 15:13:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.931 15:13:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.306 [2024-07-11 15:13:56.547429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.306 [2024-07-11 15:13:56.691550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.306 [2024-07-11 15:13:56.691554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.306 [2024-07-11 15:13:56.831962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.306 [2024-07-11 15:13:56.832048] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.229 spdk_app_start Round 1 00:05:45.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.229 15:13:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.229 15:13:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.229 15:13:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63305 /var/tmp/spdk-nbd.sock 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63305 ']' 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.229 15:13:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.229 15:13:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.488 Malloc0 00:05:45.488 15:13:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.746 Malloc1 00:05:45.746 15:13:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.746 15:13:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.746 15:13:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.746 15:13:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.746 15:13:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.746 15:13:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.747 15:13:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.005 /dev/nbd0 00:05:46.005 15:13:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.005 15:13:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.005 1+0 records in 00:05:46.005 1+0 records out 
00:05:46.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292717 s, 14.0 MB/s 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.005 15:13:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.005 15:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.005 15:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.005 15:13:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.265 /dev/nbd1 00:05:46.265 15:13:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.265 15:13:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.265 15:13:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.265 15:13:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.265 15:13:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.265 15:13:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.265 15:13:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.523 1+0 records in 00:05:46.523 1+0 records out 00:05:46.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603298 s, 6.8 MB/s 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.523 15:13:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.523 15:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.523 15:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.523 15:13:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.523 15:13:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.523 15:13:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.782 { 00:05:46.782 "nbd_device": "/dev/nbd0", 00:05:46.782 "bdev_name": "Malloc0" 00:05:46.782 }, 00:05:46.782 { 00:05:46.782 "nbd_device": "/dev/nbd1", 00:05:46.782 "bdev_name": "Malloc1" 00:05:46.782 } 
00:05:46.782 ]' 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.782 { 00:05:46.782 "nbd_device": "/dev/nbd0", 00:05:46.782 "bdev_name": "Malloc0" 00:05:46.782 }, 00:05:46.782 { 00:05:46.782 "nbd_device": "/dev/nbd1", 00:05:46.782 "bdev_name": "Malloc1" 00:05:46.782 } 00:05:46.782 ]' 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.782 /dev/nbd1' 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.782 /dev/nbd1' 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.782 15:14:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.783 256+0 records in 00:05:46.783 256+0 records out 00:05:46.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104265 s, 101 MB/s 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.783 256+0 records in 00:05:46.783 256+0 records out 00:05:46.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252906 s, 41.5 MB/s 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.783 256+0 records in 00:05:46.783 256+0 records out 00:05:46.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354867 s, 29.5 MB/s 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.783 15:14:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.041 15:14:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.042 15:14:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.301 15:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.559 15:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.559 15:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.559 15:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.817 15:14:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.817 15:14:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.076 15:14:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.011 [2024-07-11 15:14:02.540075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.270 [2024-07-11 15:14:02.686829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.270 [2024-07-11 15:14:02.686829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.270 [2024-07-11 15:14:02.827322] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.270 [2024-07-11 15:14:02.827420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.173 spdk_app_start Round 2 00:05:51.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.173 15:14:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:51.173 15:14:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:51.173 15:14:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63305 /var/tmp/spdk-nbd.sock 00:05:51.173 15:14:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63305 ']' 00:05:51.173 15:14:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.173 15:14:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.173 15:14:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
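[The empty-list branch traced above (echo '' piped to grep -c /dev/nbd, then true) exists because grep -c still prints 0 but exits nonzero when nothing matches, which would trip a set -e script. A sketch of the counting helper under that assumption:

nbd_get_count() {
  local rpc_server=$1 disks names count
  disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)  # JSON array; [] once all disks stop
  names=$(echo "$disks" | jq -r '.[] | .nbd_device')  # empty string for []
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep prints 0; || true eats its exit 1
  echo "$count"
}
]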
00:05:51.173 15:14:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.173 15:14:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.432 15:14:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.432 15:14:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.432 15:14:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.691 Malloc0 00:05:51.691 15:14:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.950 Malloc1 00:05:51.950 15:14:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.950 15:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.209 /dev/nbd0 00:05:52.209 15:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.209 15:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.209 1+0 records in 00:05:52.209 1+0 records out 
00:05:52.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408283 s, 10.0 MB/s 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.209 15:14:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.209 15:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.209 15:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.209 15:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.472 /dev/nbd1 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.472 1+0 records in 00:05:52.472 1+0 records out 00:05:52.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333956 s, 12.3 MB/s 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:52.472 15:14:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.472 15:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.740 15:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.740 { 00:05:52.740 "nbd_device": "/dev/nbd0", 00:05:52.740 "bdev_name": "Malloc0" 00:05:52.740 }, 00:05:52.740 { 00:05:52.740 "nbd_device": "/dev/nbd1", 00:05:52.740 "bdev_name": "Malloc1" 00:05:52.740 } 
00:05:52.740 ]' 00:05:52.740 15:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.740 { 00:05:52.740 "nbd_device": "/dev/nbd0", 00:05:52.740 "bdev_name": "Malloc0" 00:05:52.740 }, 00:05:52.740 { 00:05:52.740 "nbd_device": "/dev/nbd1", 00:05:52.740 "bdev_name": "Malloc1" 00:05:52.740 } 00:05:52.740 ]' 00:05:52.740 15:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.740 15:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.740 /dev/nbd1' 00:05:52.740 15:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.740 /dev/nbd1' 00:05:52.740 15:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.741 256+0 records in 00:05:52.741 256+0 records out 00:05:52.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00594798 s, 176 MB/s 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.741 256+0 records in 00:05:52.741 256+0 records out 00:05:52.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261101 s, 40.2 MB/s 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.741 256+0 records in 00:05:52.741 256+0 records out 00:05:52.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355081 s, 29.5 MB/s 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.741 15:14:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.741 15:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.999 15:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.258 15:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.516 15:14:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.516 15:14:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.083 15:14:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.019 [2024-07-11 15:14:08.607288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.278 [2024-07-11 15:14:08.761044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.278 [2024-07-11 15:14:08.761054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.536 [2024-07-11 15:14:08.904885] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.536 [2024-07-11 15:14:08.904995] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.441 15:14:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63305 /var/tmp/spdk-nbd.sock 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 63305 ']' 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
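[Between rounds the suite recycles the app through its own RPC interface rather than signalling it from outside: spdk_kill_instance asks the target to raise the given signal on itself. The two lines behind event.sh@34-35, as traced above:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM  # app SIGTERMs itself
sleep 3  # give the reactor time to exit before the next round starts
]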
00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.441 15:14:10 event.app_repeat -- event/event.sh@39 -- # killprocess 63305 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 63305 ']' 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 63305 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63305 00:05:57.441 killing process with pid 63305 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63305' 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@967 -- # kill 63305 00:05:57.441 15:14:10 event.app_repeat -- common/autotest_common.sh@972 -- # wait 63305 00:05:58.378 spdk_app_start is called in Round 0. 00:05:58.378 Shutdown signal received, stop current app iteration 00:05:58.378 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:58.378 spdk_app_start is called in Round 1. 00:05:58.378 Shutdown signal received, stop current app iteration 00:05:58.378 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:58.378 spdk_app_start is called in Round 2. 00:05:58.378 Shutdown signal received, stop current app iteration 00:05:58.378 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:05:58.378 spdk_app_start is called in Round 3. 
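[The killprocess helper traced above guards before signalling: the pid must still be alive, and its comm name (reactor_0 for an SPDK app) must not be a sudo wrapper. A sketch of that guard, assuming the helper body:

killprocess() {
  local pid=$1 process_name
  kill -0 "$pid" 2> /dev/null || return 1          # must still be running
  process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for spdk_tgt
  [[ $process_name != sudo ]] || return 1          # never signal a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2> /dev/null || true                 # reap it if it is our child
}
]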
00:05:58.378 Shutdown signal received, stop current app iteration 00:05:58.378 ************************************ 00:05:58.378 END TEST app_repeat 00:05:58.378 ************************************ 00:05:58.378 15:14:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.378 15:14:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.379 00:05:58.379 real 0m20.203s 00:05:58.379 user 0m43.827s 00:05:58.379 sys 0m2.558s 00:05:58.379 15:14:11 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.379 15:14:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.379 15:14:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:58.379 15:14:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.379 15:14:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.379 15:14:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.379 15:14:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.379 15:14:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.379 ************************************ 00:05:58.379 START TEST cpu_locks 00:05:58.379 ************************************ 00:05:58.379 15:14:11 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.379 * Looking for test storage... 00:05:58.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.379 15:14:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.379 15:14:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.379 15:14:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.379 15:14:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.379 15:14:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.379 15:14:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.379 15:14:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.379 ************************************ 00:05:58.379 START TEST default_locks 00:05:58.379 ************************************ 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:58.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63752 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63752 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 63752 ']' 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
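[Each cpu_locks subtest boots its own one-reactor target pinned to core 0 and blocks on its RPC socket, as the trace above shows for pid 63752. A sketch of that setup, with waitforlisten as sketched earlier:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &  # one reactor, core 0
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
]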
00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.379 15:14:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.638 [2024-07-11 15:14:11.996133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:58.638 [2024-07-11 15:14:11.996552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63752 ] 00:05:58.638 [2024-07-11 15:14:12.157996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.897 [2024-07-11 15:14:12.324934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.465 15:14:12 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.465 15:14:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:59.465 15:14:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63752 00:05:59.465 15:14:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63752 00:05:59.465 15:14:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.722 15:14:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63752 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 63752 ']' 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 63752 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63752 00:05:59.723 killing process with pid 63752 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63752' 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 63752 00:05:59.723 15:14:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 63752 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63752 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63752 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 63752 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 63752 ']' 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.624 ERROR: process (pid: 63752) is no longer running 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63752) - No such process 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.624 00:06:01.624 real 0m3.280s 00:06:01.624 user 0m3.400s 00:06:01.624 sys 0m0.520s 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.624 15:14:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.624 ************************************ 00:06:01.624 END TEST default_locks 00:06:01.624 ************************************ 00:06:01.624 15:14:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:01.624 15:14:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.624 15:14:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.624 15:14:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.624 15:14:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.624 ************************************ 00:06:01.624 START TEST default_locks_via_rpc 00:06:01.624 ************************************ 00:06:01.624 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:01.624 15:14:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63824 00:06:01.624 15:14:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63824 00:06:01.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
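[The NOT wrapper traced above runs a command that is expected to fail — here, waiting on a pid whose process is gone — and inverts its exit status; the real helper also validates the argument type and filters statuses above 128, which this reduced sketch omits:

NOT() {
  local es=0
  "$@" || es=$?  # run the wrapped command, remember its status
  (( es != 0 ))  # succeed only when the command failed
}
# e.g. NOT waitforlisten "$stale_pid"  # passes, since the pid no longer exists
]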
00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63824 ']' 00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.883 15:14:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.883 [2024-07-11 15:14:15.361950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:01.883 [2024-07-11 15:14:15.362187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63824 ] 00:06:02.143 [2024-07-11 15:14:15.533257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.143 [2024-07-11 15:14:15.695656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63824 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63824 00:06:02.711 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63824 
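[locks_exist, traced above for pid 63824, leans on lslocks: a locking target holds one spdk_cpu_lock file lock per claimed core, so grepping the pid's lock table is enough. A sketch, assuming the helper body:

locks_exist() {
  local pid=$1
  # spdk_tgt flocks a spdk_cpu_lock file for each core it claims
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}
]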
00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 63824 ']' 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 63824 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63824 00:06:03.279 killing process with pid 63824 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63824' 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 63824 00:06:03.279 15:14:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 63824 00:06:05.183 00:06:05.183 real 0m3.284s 00:06:05.183 user 0m3.350s 00:06:05.183 sys 0m0.547s 00:06:05.183 15:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.183 ************************************ 00:06:05.183 END TEST default_locks_via_rpc 00:06:05.183 ************************************ 00:06:05.183 15:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.183 15:14:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:05.183 15:14:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:05.183 15:14:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.183 15:14:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.183 15:14:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.183 ************************************ 00:06:05.183 START TEST non_locking_app_on_locked_coremask 00:06:05.183 ************************************ 00:06:05.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
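[The via_rpc variant just completed exercises the runtime toggles rather than command-line flags: framework_disable_cpumask_locks releases a running target's core locks and framework_enable_cpumask_locks re-acquires them. A sketch of that round trip, with socket path and helpers as above:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock || true  # expect 0 while released
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
locks_exist "$spdk_tgt_pid"                                 # the lock is back
]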
00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63892 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63892 /var/tmp/spdk.sock 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63892 ']' 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.184 15:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.184 [2024-07-11 15:14:18.700468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:05.184 [2024-07-11 15:14:18.700667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ] 00:06:05.442 [2024-07-11 15:14:18.870967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.442 [2024-07-11 15:14:19.035496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63908 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63908 /var/tmp/spdk2.sock 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63908 ']' 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
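[non_locking_app_on_locked_coremask, starting above, shows why the second target needs both --disable-cpumask-locks and its own RPC socket: core 0 is already locked by the first instance (pid 63892 in the trace). A sketch of the pair, flags as traced:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &  # claims the core-0 lock
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!  # shares core 0 only because it opts out of the lock
waitforlisten "$pid2" /var/tmp/spdk2.sock
]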
00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.393 15:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.393 [2024-07-11 15:14:19.818787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:06.393 [2024-07-11 15:14:19.819269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63908 ] 00:06:06.393 [2024-07-11 15:14:20.003762] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:06.393 [2024-07-11 15:14:20.003831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.960 [2024-07-11 15:14:20.341280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.336 15:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.336 15:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:08.336 15:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63892 00:06:08.336 15:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63892 00:06:08.336 15:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63892 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63892 ']' 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63892 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63892 00:06:08.904 killing process with pid 63892 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63892' 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63892 00:06:08.904 15:14:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63892 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63908 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63908 ']' 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63908 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 
-- # uname 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63908 00:06:13.093 killing process with pid 63908 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63908' 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63908 00:06:13.093 15:14:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63908 00:06:14.470 ************************************ 00:06:14.470 END TEST non_locking_app_on_locked_coremask 00:06:14.470 ************************************ 00:06:14.470 00:06:14.470 real 0m9.262s 00:06:14.470 user 0m9.713s 00:06:14.470 sys 0m1.140s 00:06:14.470 15:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.470 15:14:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.470 15:14:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:14.470 15:14:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.470 15:14:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.470 15:14:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.470 15:14:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.470 ************************************ 00:06:14.470 START TEST locking_app_on_unlocked_coremask 00:06:14.470 ************************************ 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64032 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64032 /var/tmp/spdk.sock 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64032 ']' 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
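[locking_app_on_unlocked_coremask, starting above with pid 64032, inverts the previous pair: the first target opts out of the lock, so a second target started with locks enabled on its own socket can still claim core 0. A sketch:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
pid1=$!  # runs on core 0 without taking the lock
waitforlisten "$pid1" /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
locks_exist "$pid2"  # only the second instance holds the core-0 lock
]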
00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.470 15:14:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.470 [2024-07-11 15:14:28.023990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:14.470 [2024-07-11 15:14:28.024189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64032 ] 00:06:14.729 [2024-07-11 15:14:28.192814] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.729 [2024-07-11 15:14:28.192870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.729 [2024-07-11 15:14:28.338866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64048 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64048 /var/tmp/spdk2.sock 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64048 ']' 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.666 15:14:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.666 [2024-07-11 15:14:29.054661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:15.666 [2024-07-11 15:14:29.054811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64048 ] 00:06:15.666 [2024-07-11 15:14:29.219934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.923 [2024-07-11 15:14:29.532742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.298 15:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.298 15:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.298 15:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64048 00:06:17.298 15:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64048 00:06:17.298 15:14:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64032 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64032 ']' 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64032 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64032 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.232 killing process with pid 64032 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64032' 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64032 00:06:18.232 15:14:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64032 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64048 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64048 ']' 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 64048 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64048 00:06:22.461 killing process with pid 64048 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.461 15:14:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64048' 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 64048 00:06:22.461 15:14:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 64048 00:06:23.398 00:06:23.398 real 0m9.059s 00:06:23.398 user 0m9.464s 00:06:23.398 sys 0m1.141s 00:06:23.398 15:14:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.398 ************************************ 00:06:23.398 END TEST locking_app_on_unlocked_coremask 00:06:23.398 ************************************ 00:06:23.398 15:14:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.398 15:14:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:23.398 15:14:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.398 15:14:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.398 15:14:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.398 15:14:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.398 ************************************ 00:06:23.398 START TEST locking_app_on_locked_coremask 00:06:23.398 ************************************ 00:06:23.398 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:23.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64168 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64168 /var/tmp/spdk.sock 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64168 ']' 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.657 15:14:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.657 [2024-07-11 15:14:37.131283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
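The locking_app_on_unlocked_coremask test that just ended exercises the inverse case: its first target runs with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice), leaving the core unlocked so a second, locking target can start on the same mask. A hypothetical minimal reproduction outside the harness:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # takes no core lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0 normally
    lslocks -p $! | grep spdk_cpu_lock                    # only the second pid shows the lock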
00:06:23.657 [2024-07-11 15:14:37.131714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64168 ] 00:06:23.916 [2024-07-11 15:14:37.301619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.916 [2024-07-11 15:14:37.464417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64184 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64184 /var/tmp/spdk2.sock 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64184 /var/tmp/spdk2.sock 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64184 /var/tmp/spdk2.sock 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 64184 ']' 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.487 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.746 [2024-07-11 15:14:38.153610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:24.746 [2024-07-11 15:14:38.154066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64184 ] 00:06:24.746 [2024-07-11 15:14:38.320221] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64168 has claimed it. 00:06:24.746 [2024-07-11 15:14:38.320319] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.315 ERROR: process (pid: 64184) is no longer running 00:06:25.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64184) - No such process 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64168 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64168 00:06:25.315 15:14:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64168 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 64168 ']' 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 64168 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64168 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64168' 00:06:25.883 killing process with pid 64168 00:06:25.883 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 64168 00:06:25.884 15:14:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 64168 00:06:27.786 ************************************ 00:06:27.786 END TEST locking_app_on_locked_coremask 00:06:27.786 ************************************ 00:06:27.786 00:06:27.786 real 0m4.000s 00:06:27.786 user 0m4.422s 00:06:27.786 sys 0m0.675s 00:06:27.786 15:14:41 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.786 15:14:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.786 15:14:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.786 15:14:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.786 15:14:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.786 15:14:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.786 15:14:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.786 ************************************ 00:06:27.786 START TEST locking_overlapped_coremask 00:06:27.786 ************************************ 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64248 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64248 /var/tmp/spdk.sock 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64248 ']' 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.786 15:14:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.786 [2024-07-11 15:14:41.188451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
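The locking_app_on_locked_coremask run above expected its second target (pid 64184) to die ('Cannot create lock on core 0, probably process 64168 has claimed it'), so waitforlisten is wrapped in NOT, which inverts the exit status. A simplified sketch of that helper; the real one in test/common/autotest_common.sh also validates its argument, this assumes only the essential shape:

    NOT() {
        # succeed exactly when the wrapped command fails
        if "$@"; then return 1; else return 0; fi
    }
    NOT waitforlisten 64184 /var/tmp/spdk2.sock   # passes: startup aborts on the claimed core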
00:06:27.786 [2024-07-11 15:14:41.188662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64248 ] 00:06:27.786 [2024-07-11 15:14:41.346412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.045 [2024-07-11 15:14:41.517276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.045 [2024-07-11 15:14:41.517374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.045 [2024-07-11 15:14:41.517383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64266 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64266 /var/tmp/spdk2.sock 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 64266 /var/tmp/spdk2.sock 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 64266 /var/tmp/spdk2.sock 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 64266 ']' 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.614 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.873 [2024-07-11 15:14:42.285686] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
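The two masks in this overlapped test are chosen so the claim must collide: 0x7 covers cores 0-2, 0x1c covers cores 2-4, and the intersection is plain bit arithmetic, matching the 'Cannot create lock on core 2' error that follows:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2, core 2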
00:06:28.873 [2024-07-11 15:14:42.286434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64266 ] 00:06:28.873 [2024-07-11 15:14:42.461374] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64248 has claimed it. 00:06:28.873 [2024-07-11 15:14:42.461477] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.441 ERROR: process (pid: 64266) is no longer running 00:06:29.441 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (64266) - No such process 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64248 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 64248 ']' 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 64248 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64248 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64248' 00:06:29.441 killing process with pid 64248 00:06:29.441 15:14:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 64248 00:06:29.441 15:14:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 64248 00:06:31.342 00:06:31.342 real 0m3.756s 00:06:31.342 user 0m9.945s 00:06:31.342 sys 0m0.496s 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.342 ************************************ 00:06:31.342 END TEST locking_overlapped_coremask 00:06:31.342 ************************************ 00:06:31.342 15:14:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.342 15:14:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:31.342 15:14:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.342 15:14:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.342 15:14:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.342 ************************************ 00:06:31.342 START TEST locking_overlapped_coremask_via_rpc 00:06:31.342 ************************************ 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64330 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64330 /var/tmp/spdk.sock 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:31.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64330 ']' 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.342 15:14:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.601 [2024-07-11 15:14:44.972701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:31.601 [2024-07-11 15:14:44.972853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64330 ] 00:06:31.601 [2024-07-11 15:14:45.134509] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
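Before the via-rpc variant above started, the overlapped test confirmed which lock files survive with check_remaining_locks. Reassembled from its xtrace (cpu_locks.sh@36-38), with the literal pattern match rewritten as a quoted comparison:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # all three per-core lock files for mask 0x7 must still exist, and nothing else
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }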
00:06:31.601 [2024-07-11 15:14:45.134578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.860 [2024-07-11 15:14:45.309242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.860 [2024-07-11 15:14:45.309356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.860 [2024-07-11 15:14:45.309385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64348 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64348 /var/tmp/spdk2.sock 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64348 ']' 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.428 15:14:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.428 [2024-07-11 15:14:46.032421] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:32.428 [2024-07-11 15:14:46.032859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64348 ] 00:06:32.686 [2024-07-11 15:14:46.201863] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.686 [2024-07-11 15:14:46.201992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.945 [2024-07-11 15:14:46.545116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.945 [2024-07-11 15:14:46.545216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.945 [2024-07-11 15:14:46.545241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.322 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.323 [2024-07-11 15:14:47.855374] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64330 has claimed it. 00:06:34.323 request: 00:06:34.323 { 00:06:34.323 "method": "framework_enable_cpumask_locks", 00:06:34.323 "req_id": 1 00:06:34.323 } 00:06:34.323 Got JSON-RPC error response 00:06:34.323 response: 00:06:34.323 { 00:06:34.323 "code": -32603, 00:06:34.323 "message": "Failed to claim CPU core: 2" 00:06:34.323 } 00:06:34.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
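That JSON-RPC exchange is the heart of the via-rpc test: both targets boot with --disable-cpumask-locks, the first call to framework_enable_cpumask_locks claims the cores, and the second target's attempt fails with -32603 because core 2 is already held. Assuming scripts/rpc.py exposes the method under the same name rpc_cmd uses here:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: claim succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # -> -32603 'Failed to claim CPU core: 2'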
00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64330 /var/tmp/spdk.sock 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64330 ']' 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.323 15:14:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64348 /var/tmp/spdk2.sock 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 64348 ']' 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.582 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.842 ************************************ 00:06:34.842 END TEST locking_overlapped_coremask_via_rpc 00:06:34.842 ************************************ 00:06:34.842 00:06:34.842 real 0m3.543s 00:06:34.842 user 0m1.378s 00:06:34.842 sys 0m0.180s 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.842 15:14:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.842 15:14:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:34.842 15:14:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.842 15:14:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64330 ]] 00:06:34.842 15:14:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64330 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64330 ']' 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64330 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64330 00:06:35.102 killing process with pid 64330 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64330' 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64330 00:06:35.102 15:14:48 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64330 00:06:37.007 15:14:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64348 ]] 00:06:37.007 15:14:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64348 00:06:37.007 15:14:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64348 ']' 00:06:37.007 15:14:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64348 00:06:37.007 15:14:50 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:37.007 15:14:50 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.007 15:14:50 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64348 00:06:37.007 killing process with pid 64348 00:06:37.007 15:14:50 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:37.008 15:14:50 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:37.008 15:14:50 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64348' 00:06:37.008 15:14:50 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 64348 00:06:37.008 15:14:50 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 64348 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.944 Process with pid 64330 is not found 00:06:38.944 Process with pid 64348 is not found 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64330 ]] 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64330 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64330 ']' 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64330 00:06:38.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64330) - No such process 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64330 is not found' 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64348 ]] 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64348 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 64348 ']' 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 64348 00:06:38.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (64348) - No such process 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 64348 is not found' 00:06:38.944 15:14:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:38.944 ************************************ 00:06:38.944 END TEST cpu_locks 00:06:38.944 ************************************ 00:06:38.944 00:06:38.944 real 0m40.546s 00:06:38.944 user 1m9.274s 00:06:38.944 sys 0m5.596s 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.944 15:14:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 15:14:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:38.944 ************************************ 00:06:38.944 END TEST event 00:06:38.944 ************************************ 00:06:38.944 00:06:38.944 real 1m11.253s 00:06:38.944 user 2m8.952s 00:06:38.944 sys 0m9.079s 00:06:38.944 15:14:52 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.944 15:14:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 15:14:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.944 15:14:52 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:38.944 15:14:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.944 15:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.944 15:14:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 ************************************ 00:06:38.944 START TEST thread 
00:06:38.944 ************************************ 00:06:38.944 15:14:52 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:38.944 * Looking for test storage... 00:06:38.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:38.944 15:14:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:38.944 15:14:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:38.944 15:14:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.944 15:14:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 ************************************ 00:06:38.944 START TEST thread_poller_perf 00:06:38.944 ************************************ 00:06:38.944 15:14:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.203 [2024-07-11 15:14:52.570053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:39.203 [2024-07-11 15:14:52.570240] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64511 ] 00:06:39.203 [2024-07-11 15:14:52.743118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.462 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:39.462 [2024-07-11 15:14:52.955598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.861 ====================================== 00:06:40.861 busy:2209101218 (cyc) 00:06:40.861 total_run_count: 354000 00:06:40.861 tsc_hz: 2200000000 (cyc) 00:06:40.861 ====================================== 00:06:40.861 poller_cost: 6240 (cyc), 2836 (nsec) 00:06:40.861 00:06:40.861 real 0m1.774s 00:06:40.861 user 0m1.564s 00:06:40.861 sys 0m0.100s 00:06:40.861 15:14:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.861 ************************************ 00:06:40.861 END TEST thread_poller_perf 00:06:40.861 ************************************ 00:06:40.861 15:14:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.861 15:14:54 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:40.861 15:14:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.861 15:14:54 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:40.861 15:14:54 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.861 15:14:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.861 ************************************ 00:06:40.861 START TEST thread_poller_perf 00:06:40.861 ************************************ 00:06:40.861 15:14:54 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:40.861 [2024-07-11 15:14:54.398437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
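poller_cost in the first summary above is just the other three numbers combined: busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Reproducing the 1 µs-period run's figures in shell arithmetic:

    echo $(( 2209101218 / 354000 ))              # -> 6240 cycles per poll
    echo $(( 6240 * 1000000000 / 2200000000 ))   # -> 2836 ns at the reported 2.2 GHz TSC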
00:06:40.861 [2024-07-11 15:14:54.398642] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64553 ] 00:06:41.120 [2024-07-11 15:14:54.568908] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.120 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:41.120 [2024-07-11 15:14:54.724237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.498 ====================================== 00:06:42.498 busy:2203672095 (cyc) 00:06:42.498 total_run_count: 4404000 00:06:42.498 tsc_hz: 2200000000 (cyc) 00:06:42.498 ====================================== 00:06:42.498 poller_cost: 500 (cyc), 227 (nsec) 00:06:42.498 00:06:42.498 real 0m1.721s 00:06:42.498 user 0m1.506s 00:06:42.498 sys 0m0.106s 00:06:42.498 15:14:56 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.498 ************************************ 00:06:42.498 END TEST thread_poller_perf 00:06:42.498 ************************************ 00:06:42.498 15:14:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.757 15:14:56 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:42.757 15:14:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.757 ************************************ 00:06:42.757 END TEST thread 00:06:42.757 ************************************ 00:06:42.757 00:06:42.757 real 0m3.677s 00:06:42.757 user 0m3.128s 00:06:42.757 sys 0m0.320s 00:06:42.757 15:14:56 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.757 15:14:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.757 15:14:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:42.757 15:14:56 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:42.757 15:14:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.757 15:14:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.757 15:14:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.757 ************************************ 00:06:42.757 START TEST accel 00:06:42.757 ************************************ 00:06:42.757 15:14:56 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:42.757 * Looking for test storage... 00:06:42.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:42.757 15:14:56 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:42.757 15:14:56 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:42.757 15:14:56 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.757 15:14:56 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=64629 00:06:42.757 15:14:56 accel -- accel/accel.sh@63 -- # waitforlisten 64629 00:06:42.757 15:14:56 accel -- common/autotest_common.sh@829 -- # '[' -z 64629 ']' 00:06:42.757 15:14:56 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.757 15:14:56 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:42.757 15:14:56 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:42.757 15:14:56 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
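The zero-period run that just finished reports a much lower per-poll cost, consistent with active pollers being dispatched straight from the reactor loop rather than through the timer path. The same arithmetic on its summary:

    echo $(( 2203672095 / 4404000 ))             # -> 500 cycles per poll
    echo $(( 500 * 1000000000 / 2200000000 ))    # -> 227 ns at 2.2 GHz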
00:06:42.757 15:14:56 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.757 15:14:56 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.757 15:14:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.757 15:14:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.757 15:14:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.757 15:14:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.757 15:14:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.757 15:14:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.757 15:14:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:42.757 15:14:56 accel -- accel/accel.sh@41 -- # jq -r . 00:06:42.757 [2024-07-11 15:14:56.361451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:42.757 [2024-07-11 15:14:56.361921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64629 ] 00:06:43.016 [2024-07-11 15:14:56.530012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.276 [2024-07-11 15:14:56.695405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@862 -- # return 0 00:06:43.844 15:14:57 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:43.844 15:14:57 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:43.844 15:14:57 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:43.844 15:14:57 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:43.844 15:14:57 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:43.844 15:14:57 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:43.844 15:14:57 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 
15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # IFS== 00:06:43.844 15:14:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:43.844 15:14:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:43.844 15:14:57 accel -- accel/accel.sh@75 -- # killprocess 64629 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@948 -- # '[' -z 64629 ']' 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@952 -- # kill -0 64629 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@953 -- # uname 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64629 00:06:43.844 killing process with pid 64629 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64629' 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@967 -- # kill 64629 00:06:43.844 15:14:57 accel -- common/autotest_common.sh@972 -- # wait 64629 00:06:45.751 15:14:59 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:45.751 15:14:59 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.751 15:14:59 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:45.751 15:14:59 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
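The opcode enumeration above feeds the accel_get_opc_assignments RPC output through jq to flatten it into opc=module pairs. A minimal standalone sketch of the same transformation, assuming a running SPDK target and the rpc.py script at the path this log uses:

    # Query opcode-to-module assignments and flatten them to opc=module pairs,
    # mirroring how the harness fills its exp_opcs array.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # Each emitted line can then be split on '=' exactly as the loop above does:
    #   IFS== read -r opc module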
00:06:45.751 15:14:59 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.751 15:14:59 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.751 15:14:59 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.751 15:14:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.010 ************************************ 00:06:46.010 START TEST accel_missing_filename 00:06:46.010 ************************************ 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.010 15:14:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:46.010 15:14:59 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:46.010 [2024-07-11 15:14:59.429553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:46.010 [2024-07-11 15:14:59.429749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64699 ] 00:06:46.010 [2024-07-11 15:14:59.602316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.269 [2024-07-11 15:14:59.768472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.528 [2024-07-11 15:14:59.937117] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.787 [2024-07-11 15:15:00.364996] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:47.355 A filename is required. 
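The accel_missing_filename case relies on accel_perf exiting non-zero when a compress workload is requested without an input file. A hedged re-creation of that check outside the harness (the if/then guard is a simplified stand-in for the NOT wrapper used above; the path follows this log and may differ in other checkouts):

    # Expect failure: per the usage text, compress requires -l <input file>.
    if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; then
        echo "unexpected success; a filename should be required" >&2
        exit 1
    fi
    echo "accel_perf failed as expected (missing filename)"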
00:06:47.355 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:47.355 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.355 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:47.355 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.355 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:47.355 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.355 00:06:47.355 real 0m1.362s 00:06:47.355 user 0m1.142s 00:06:47.355 sys 0m0.155s 00:06:47.356 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.356 15:15:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 ************************************ 00:06:47.356 END TEST accel_missing_filename 00:06:47.356 ************************************ 00:06:47.356 15:15:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.356 15:15:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.356 15:15:00 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:47.356 15:15:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.356 15:15:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.356 ************************************ 00:06:47.356 START TEST accel_compress_verify 00:06:47.356 ************************************ 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.356 15:15:00 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.356 15:15:00 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:47.356 15:15:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:47.356 [2024-07-11 15:15:00.842438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:47.356 [2024-07-11 15:15:00.842615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64735 ] 00:06:47.615 [2024-07-11 15:15:01.010639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.615 [2024-07-11 15:15:01.194500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.874 [2024-07-11 15:15:01.377531] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.441 [2024-07-11 15:15:01.821398] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.700 00:06:48.700 Compression does not support the verify option, aborting. 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.700 00:06:48.700 real 0m1.420s 00:06:48.700 user 0m1.194s 00:06:48.700 sys 0m0.158s 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.700 ************************************ 00:06:48.700 END TEST accel_compress_verify 00:06:48.700 ************************************ 00:06:48.700 15:15:02 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:48.700 15:15:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.700 15:15:02 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:48.700 15:15:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.700 15:15:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.700 15:15:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.700 ************************************ 00:06:48.700 START TEST accel_wrong_workload 00:06:48.700 ************************************ 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.700 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:48.700 15:15:02 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:48.700 Unsupported workload type: foobar 00:06:48.700 [2024-07-11 15:15:02.305577] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:48.959 accel_perf options: 00:06:48.959 [-h help message] 00:06:48.959 [-q queue depth per core] 00:06:48.959 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:48.959 [-T number of threads per core 00:06:48.959 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:48.959 [-t time in seconds] 00:06:48.959 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:48.959 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:48.959 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:48.959 [-l for compress/decompress workloads, name of uncompressed input file 00:06:48.959 [-S for crc32c workload, use this seed value (default 0) 00:06:48.959 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:48.959 [-f for fill workload, use this BYTE value (default 255) 00:06:48.959 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:48.959 [-y verify result if this switch is on] 00:06:48.959 [-a tasks to allocate per core (default: same value as -q)] 00:06:48.959 Can be used to spread operations across a wider range of memory. 
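Per the option summary above, a well-formed invocation supplies one of the supported -w values plus any workload-specific flags. Illustrative invocations built from that listing (flag values chosen here for illustration, not taken from this run; paths follow this log):

    # crc32c with a non-default seed, verifying results:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # xor with the documented minimum of two source buffers:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
    # compress with the required uncompressed input file:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib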
00:06:48.959 ************************************ 00:06:48.959 END TEST accel_wrong_workload 00:06:48.959 ************************************ 00:06:48.959 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:48.959 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.959 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.959 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.959 00:06:48.959 real 0m0.077s 00:06:48.959 user 0m0.085s 00:06:48.959 sys 0m0.041s 00:06:48.959 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.959 15:15:02 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:48.959 15:15:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.959 15:15:02 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.959 15:15:02 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:48.959 15:15:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.959 15:15:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.959 ************************************ 00:06:48.959 START TEST accel_negative_buffers 00:06:48.959 ************************************ 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:48.959 15:15:02 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:48.959 -x option must be non-negative. 
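accel_negative_buffers asserts that accel_perf rejects a negative -x during argument parsing, before the app starts. A minimal sketch of the same assertion, with not() as a simplified stand-in for the harness's NOT helper:

    not() { ! "$@"; }   # succeed only if the wrapped command fails
    not /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1 \
        && echo "negative source-buffer count rejected, as expected"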
00:06:48.959 [2024-07-11 15:15:02.426980] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:48.959 accel_perf options: [option summary identical to the listing printed above for the accel_wrong_workload case] 00:06:48.959 ************************************ 00:06:48.959 END TEST accel_negative_buffers 00:06:48.959 ************************************ 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.959 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.960 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.960 00:06:48.960 real 0m0.062s 00:06:48.960 user 0m0.075s 00:06:48.960 sys 0m0.034s 00:06:48.960 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.960 15:15:02 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:48.960 15:15:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.960 15:15:02 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:48.960 15:15:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:48.960 15:15:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.960 15:15:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.960 ************************************ 00:06:48.960 START TEST accel_crc32c 00:06:48.960 ************************************ 00:06:48.960 15:15:02 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1
-w crc32c -S 32 -y 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:48.960 15:15:02 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:48.960 [2024-07-11 15:15:02.539049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:48.960 [2024-07-11 15:15:02.539216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64808 ] 00:06:49.218 [2024-07-11 15:15:02.711475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.477 [2024-07-11 15:15:02.889272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.477 15:15:03 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:51.378 15:15:04 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.378 00:06:51.378 real 0m2.347s 00:06:51.378 user 0m2.125s 00:06:51.378 sys 0m0.131s 00:06:51.378 ************************************ 00:06:51.378 END TEST accel_crc32c 00:06:51.378 ************************************ 00:06:51.378 15:15:04 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.378 15:15:04 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:51.378 15:15:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.378 15:15:04 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:51.378 15:15:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.378 15:15:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.378 15:15:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.378 ************************************ 00:06:51.378 START TEST accel_crc32c_C2 00:06:51.378 ************************************ 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:51.378 15:15:04 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:51.378 15:15:04 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:51.378 [2024-07-11 15:15:04.935794] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:51.378 [2024-07-11 15:15:04.935972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64854 ] 00:06:51.636 [2024-07-11 15:15:05.109088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.918 [2024-07-11 15:15:05.281091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.918 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.918 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.918 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.918 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.918 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.919 15:15:05 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.839 15:15:07 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.839 00:06:53.839 real 0m2.319s 00:06:53.839 user 0m2.069s 00:06:53.839 sys 0m0.152s 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.839 15:15:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:53.839 ************************************ 00:06:53.839 END TEST accel_crc32c_C2 00:06:53.839 ************************************ 00:06:53.839 15:15:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.839 15:15:07 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:53.839 15:15:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:53.839 15:15:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.839 15:15:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.839 ************************************ 00:06:53.839 START TEST accel_copy 00:06:53.839 ************************************ 00:06:53.839 15:15:07 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.839 15:15:07 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:53.839 15:15:07 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:53.839 [2024-07-11 15:15:07.310149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:53.839 [2024-07-11 15:15:07.310333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64895 ] 00:06:54.098 [2024-07-11 15:15:07.477428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.098 [2024-07-11 15:15:07.624488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 
15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.357 15:15:07 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.262 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:56.263 15:15:09 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.263 00:06:56.263 real 0m2.234s 00:06:56.263 user 0m1.997s 00:06:56.263 sys 0m0.143s 00:06:56.263 15:15:09 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.263 15:15:09 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:56.263 ************************************ 00:06:56.263 END TEST accel_copy 00:06:56.263 ************************************ 00:06:56.263 15:15:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.263 15:15:09 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.263 15:15:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:56.263 15:15:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.263 15:15:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.263 ************************************ 00:06:56.263 START TEST accel_fill 00:06:56.263 ************************************ 00:06:56.263 15:15:09 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.263 15:15:09 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:56.263 15:15:09 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:56.263 [2024-07-11 15:15:09.590441] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:56.263 [2024-07-11 15:15:09.591043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64944 ] 00:06:56.263 [2024-07-11 15:15:09.760002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.522 [2024-07-11 15:15:09.916183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.522 15:15:10 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.522 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.523 15:15:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
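[annotation] The accel_fill case above launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf with "-c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y". A minimal sketch for re-running that workload by hand; the flag readings are inferred from this log rather than taken from accel_perf's help text, so treat them as assumptions:

  # Sketch: re-run the fill workload outside the harness. Flag readings are
  # inferred from this log, not documented: -t run seconds, -w workload type,
  # -f fill pattern byte, -q queue depth, -a buffer alignment, -y verify
  # each completed operation.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y
  # The harness additionally passes -c /dev/fd/62 so accel_perf reads a
  # generated JSON accel config from a file descriptor; with nothing enabled
  # the software module handles the opcode, matching accel_module=software
  # in the trace above.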
00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:58.427 15:15:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.427 ************************************ 00:06:58.427 END TEST accel_fill 00:06:58.427 ************************************ 00:06:58.427 00:06:58.427 real 0m2.262s 00:06:58.427 user 0m2.042s 00:06:58.427 sys 0m0.129s 00:06:58.427 15:15:11 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.427 15:15:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:58.427 15:15:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.427 15:15:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:58.427 15:15:11 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.427 15:15:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.427 15:15:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.427 ************************************ 00:06:58.427 START TEST accel_copy_crc32c 00:06:58.427 ************************************ 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:58.427 15:15:11 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:58.427 [2024-07-11 15:15:11.915520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:58.427 [2024-07-11 15:15:11.915713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64985 ] 00:06:58.687 [2024-07-11 15:15:12.084328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.687 [2024-07-11 15:15:12.233585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.945 15:15:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
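[annotation] Each IFS=: / read -r var val / case "$var" triple traced above is one pass of the output-parsing loop at accel/accel.sh lines 19-23. A paraphrased reconstruction of what those trace entries imply, not the verbatim script; the case patterns are assumptions, only the variable names and assignments come from the trace:

  # Paraphrase of the loop behind trace entries accel/accel.sh@19-23: the
  # accel_perf summary is consumed as colon-separated "key: value" lines and
  # the module/opcode that actually ran are captured for later assertions.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  while IFS=: read -r var val; do
    val=${val# }                      # the bare val=... entries at @20
    case "$var" in                    # @21
      *opc*) accel_opc=$val ;;        # @23, e.g. accel_opc=copy_crc32c
      *module*) accel_module=$val ;;  # @22, e.g. accel_module=software
    esac
  done < <("$ACCEL_PERF" -t 1 -w copy_crc32c -y)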
00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.925 00:07:00.925 real 0m2.265s 00:07:00.925 user 0m2.029s 00:07:00.925 sys 0m0.143s 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.925 ************************************ 00:07:00.925 END TEST accel_copy_crc32c 00:07:00.925 ************************************ 00:07:00.925 15:15:14 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.925 15:15:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.925 15:15:14 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.925 15:15:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.925 15:15:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.925 15:15:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.925 ************************************ 00:07:00.925 START TEST accel_copy_crc32c_C2 00:07:00.925 ************************************ 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.925 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:00.925 [2024-07-11 15:15:14.235337] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:00.925 [2024-07-11 15:15:14.235497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65026 ] 00:07:00.925 [2024-07-11 15:15:14.403023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.184 [2024-07-11 15:15:14.561801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.184 15:15:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.088 00:07:03.088 real 0m2.290s 00:07:03.088 user 0m2.049s 00:07:03.088 sys 0m0.149s 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
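[annotation] The closing checks above, [[ -n software ]], [[ -n copy_crc32c ]] and [[ software == \s\o\f\t\w\a\r\e ]], are the pass criteria: a module and an opcode were reported, and the module was the software one. The backslash soup is not corruption; under set -x bash re-quotes the pattern side of a [[ == ]] test by escaping every character. A small demo:

  # Why the trace prints \s\o\f\t\w\a\r\e: the right-hand side of == inside
  # [[ ]] is a glob pattern, and xtrace escapes it character by character.
  set -x
  accel_module=software
  [[ $accel_module == software ]] && echo "software module handled the opcode"
  set +x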
00:07:03.088 ************************************ 00:07:03.088 END TEST accel_copy_crc32c_C2 00:07:03.088 ************************************ 00:07:03.088 15:15:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:03.088 15:15:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.088 15:15:16 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:03.088 15:15:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:03.088 15:15:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.088 15:15:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.088 ************************************ 00:07:03.088 START TEST accel_dualcast 00:07:03.088 ************************************ 00:07:03.088 15:15:16 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:03.088 15:15:16 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:03.088 [2024-07-11 15:15:16.577905] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
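[annotation] The build_accel_config trace repeated before every run (accel/accel.sh@12 through @41) shows an empty accel_json_cfg array, three [[ 0 -gt 0 ]] feature guards, an [[ -n '' ]] check, local IFS=, and jq -r . . A hedged reconstruction of that helper's shape; the JSON skeleton and the example guard are assumptions, only the traced statements are grounded:

  # Hedged reconstruction of build_accel_config; the real accel/accel.sh may
  # differ. In this run every guard is 0/empty, so jq pretty-prints a config
  # with no hardware accel modules, and the software module does the work.
  build_accel_config() {
    local accel_json_cfg=()
    # guards such as: [[ $SPDK_TEST_ACCEL_DSA -gt 0 ]] &&
    #   accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')  # example only
    local IFS=,
    printf '{"subsystems":[{"subsystem":"accel","config":[%s]}]}' \
      "${accel_json_cfg[*]}" | jq -r .
  }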
00:07:03.088 [2024-07-11 15:15:16.578104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65077 ] 00:07:03.347 [2024-07-11 15:15:16.749362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.347 [2024-07-11 15:15:16.908063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:03.606 15:15:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.529 ************************************ 00:07:05.529 END TEST accel_dualcast 00:07:05.529 ************************************ 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:05.529 15:15:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.529 00:07:05.529 real 0m2.245s 00:07:05.529 user 0m1.995s 00:07:05.529 sys 0m0.157s 00:07:05.529 15:15:18 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.529 15:15:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:05.529 15:15:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.529 15:15:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:05.529 15:15:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.529 15:15:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.529 15:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.529 ************************************ 00:07:05.529 START TEST accel_compare 00:07:05.529 ************************************ 00:07:05.529 15:15:18 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.529 15:15:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.530 15:15:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.530 15:15:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.530 15:15:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.530 15:15:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:05.530 15:15:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:05.530 [2024-07-11 15:15:18.879501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
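[annotation] Every case above is wrapped by run_test from common/autotest_common.sh, which prints the asterisk START/END banners and times the body (the real/user/sys lines are bash's time output). A paraphrased sketch based only on what the trace shows; the real wrapper differs in detail:

  # Sketch of run_test as implied by the banners, timings, and the
  # '[' N -le 1 ']' argument checks traced at autotest_common.sh@1099.
  run_test() {
    local test_name=$1; shift
    if [ $# -le 1 ]; then
      return 1   # paraphrase of the argument-count guard at @1099
    fi
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys summary seen after each case
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
  }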
00:07:05.530 [2024-07-11 15:15:18.879674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65119 ] 00:07:05.530 [2024-07-11 15:15:19.049415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.789 [2024-07-11 15:15:19.203782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.789 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:05.790 15:15:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 
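[annotation] One more xtrace reading aid: values are only quoted when they need it, so the trace shows val=software and val=32 bare, but val='4096 bytes' and val='1 seconds' in single quotes. Those are ordinary strings from the accel_perf summary, not malformed output:

  # Demo of the quoting rule bash xtrace applies in this log:
  set -x
  val=software      # traced as: val=software
  val='4096 bytes'  # traced as: val='4096 bytes'
  set +x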
00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:07.693 15:15:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.693 00:07:07.693 real 0m2.256s 00:07:07.693 user 0m2.009s 00:07:07.693 sys 0m0.154s 00:07:07.693 15:15:21 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.693 15:15:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:07.693 ************************************ 00:07:07.693 END TEST accel_compare 00:07:07.693 ************************************ 00:07:07.693 15:15:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.693 15:15:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:07.693 15:15:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.693 15:15:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.693 15:15:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.693 ************************************ 00:07:07.693 START TEST accel_xor 00:07:07.693 ************************************ 00:07:07.693 15:15:21 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:07.693 15:15:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:07.693 15:15:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:07.693 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:07.693 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:07.693 15:15:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:07.694 15:15:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:07.694 [2024-07-11 15:15:21.184311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:07.694 [2024-07-11 15:15:21.184479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65160 ] 00:07:07.953 [2024-07-11 15:15:21.352135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.953 [2024-07-11 15:15:21.505133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.212 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.213 15:15:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.129 15:15:23 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.129 00:07:10.129 real 0m2.254s 00:07:10.129 user 0m2.023s 00:07:10.129 sys 0m0.139s 00:07:10.129 15:15:23 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.129 ************************************ 00:07:10.129 END TEST accel_xor 00:07:10.129 ************************************ 00:07:10.129 15:15:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:10.129 15:15:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.129 15:15:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:10.129 15:15:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:10.129 15:15:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.129 15:15:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.129 ************************************ 00:07:10.129 START TEST accel_xor 00:07:10.129 ************************************ 00:07:10.129 15:15:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.129 15:15:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.129 [2024-07-11 15:15:23.483445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
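Annotation: the START/END banners, the xtrace_disable calls, and the real/user/sys lines that bracket each test above come from autotest's run_test helper. A minimal sketch of that pattern, assuming a simplified wrapper (the real common/autotest_common.sh also manages xtrace state and error reporting):

#!/usr/bin/env bash
# Simplified sketch of the run_test wrapper visible in this log.
run_test() {
    local test_name=$1
    shift

    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"

    # `time` emits the real/user/sys lines printed after each test.
    time "$@"
    local rc=$?

    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

# Invoked above as, for example:
#   run_test accel_xor accel_test -t 1 -w xor -y -x 3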
00:07:10.129 [2024-07-11 15:15:23.483583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65201 ] 00:07:10.129 [2024-07-11 15:15:23.635767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.389 [2024-07-11 15:15:23.797307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
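Annotation: every accel_perf run in this log re-initializes DPDK's EAL with the same parameter block, differing only in the per-run PID. An annotated copy for reference; the flag meanings below are taken from DPDK's EAL documentation, not from this log:

# DPDK EAL parameters passed to each accel_perf run here:
#   --no-shconf               no shared EAL configuration files (single-process run)
#   -c 0x1                    core mask, core 0 only (hence "Total cores available: 1")
#   --huge-unlink             unlink hugepage backing files once they are mapped
#   --no-telemetry            disable the DPDK telemetry socket
#   --log-level=lib.*:N       per-library log verbosity (eal, cryptodev, power)
#   --iova-mode=pa            address devices by physical address (PA IOVA mode)
#   --base-virtaddr=0x2...    map shared memory at a fixed base virtual address
#   --match-allocations       free hugepages back exactly as they were allocated
#   --file-prefix=spdk_pidN   namespace runtime files per accel_perf process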
00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.389 15:15:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.294 15:15:25 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.294 15:15:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.294 00:07:12.294 real 0m2.243s 00:07:12.294 user 0m2.020s 00:07:12.294 sys 0m0.130s 00:07:12.294 ************************************ 00:07:12.294 END TEST accel_xor 00:07:12.294 ************************************ 00:07:12.294 15:15:25 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.294 15:15:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.294 15:15:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.294 15:15:25 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:12.294 15:15:25 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:12.294 15:15:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.294 15:15:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.294 ************************************ 00:07:12.294 START TEST accel_dif_verify 00:07:12.294 ************************************ 00:07:12.294 15:15:25 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:12.294 15:15:25 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:12.295 15:15:25 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:12.295 [2024-07-11 15:15:25.784476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
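Annotation: the long runs of accel.sh@19-@21 trace records above are accel.sh scraping accel_perf's configuration dump: with IFS set to ':' it reads each line into a var/val pair, and a case statement captures the fields it cares about (the opcode into accel_opc, the module into accel_module). A stripped-down sketch of the idiom, with a hypothetical here-doc standing in for the tool's real output:

#!/usr/bin/env bash
# Sketch of the IFS=: / read -r var val / case "$var" loop traced above.
accel_opc=""
accel_module=""

while IFS=: read -r var val; do
    case "$var" in
        *opcode*) accel_opc=${val# } ;;    # e.g. "xor"
        *module*) accel_module=${val# } ;; # e.g. "software"
    esac
done <<'EOF'
opcode: xor
module: software
queue depth: 32
run time: 1 seconds
EOF

echo "opc=$accel_opc module=$accel_module"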
00:07:12.295 [2024-07-11 15:15:25.784650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65248 ] 00:07:12.554 [2024-07-11 15:15:25.960500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.554 [2024-07-11 15:15:26.170371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.814 15:15:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.719 15:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.719 15:15:28 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.719 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.719 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.719 15:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.719 15:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:14.720 15:15:28 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.720 00:07:14.720 real 0m2.316s 00:07:14.720 user 0m2.068s 00:07:14.720 sys 0m0.156s 00:07:14.720 15:15:28 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.720 15:15:28 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.720 ************************************ 00:07:14.720 END TEST accel_dif_verify 00:07:14.720 ************************************ 00:07:14.720 15:15:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.720 15:15:28 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:14.720 15:15:28 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.720 15:15:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.720 15:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.720 ************************************ 00:07:14.720 START TEST accel_dif_generate 00:07:14.720 ************************************ 00:07:14.720 15:15:28 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.720 15:15:28 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:14.720 15:15:28 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:14.720 [2024-07-11 15:15:28.150167] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:14.720 [2024-07-11 15:15:28.150335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65289 ] 00:07:14.720 [2024-07-11 15:15:28.311852] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.978 [2024-07-11 15:15:28.472604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.238 15:15:28 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.238 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.239 15:15:28 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.239 15:15:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:17.143 15:15:30 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.143 00:07:17.143 real 0m2.258s 
00:07:17.143 user 0m2.025s 00:07:17.143 sys 0m0.139s 00:07:17.143 15:15:30 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.143 15:15:30 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:17.143 ************************************ 00:07:17.143 END TEST accel_dif_generate 00:07:17.143 ************************************ 00:07:17.143 15:15:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.143 15:15:30 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:17.143 15:15:30 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:17.143 15:15:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.143 15:15:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.143 ************************************ 00:07:17.143 START TEST accel_dif_generate_copy 00:07:17.143 ************************************ 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:17.143 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:17.143 [2024-07-11 15:15:30.471827] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
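Annotation: the three DIF workloads in this stretch (dif_verify, dif_generate, and the dif_generate_copy test starting here) were configured above with 4096-byte transfer buffers, a 512-byte block size, and an 8-byte metadata chunk, which matches the T10 DIF layout of 8 bytes of protection information (guard tag, application tag, reference tag) per block. A hedged sketch of driving them outside the harness, reusing only the path and flags that appear verbatim in this log:

#!/usr/bin/env bash
# Run the DIF workloads standalone; -t (run time in seconds) and
# -w (workload) are used exactly as in the traced accel_test invocations.
ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf

for workload in dif_verify dif_generate dif_generate_copy; do
    "$ACCEL_PERF" -t 1 -w "$workload"
done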
00:07:17.143 [2024-07-11 15:15:30.472000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65335 ] 00:07:17.143 [2024-07-11 15:15:30.646343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.403 [2024-07-11 15:15:30.799584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.403 15:15:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
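Annotation: each accel_perf invocation above passes -c /dev/fd/62, meaning the tool reads its JSON config from a file descriptor rather than a temporary file: build_accel_config joins the accel_json_cfg entries with commas (the `local IFS=,` in the trace) and normalizes the result through `jq -r .`. A minimal sketch of that trick; the config entry is hypothetical, and the exact fd wiring in accel.sh is assumed, not shown in this log:

#!/usr/bin/env bash
# Feed a generated JSON config to a consumer over /dev/fd, as accel.sh
# does with `accel_perf -c /dev/fd/62`.
accel_json_cfg=('"method": "framework_start_init"')  # hypothetical entry

build_accel_config() {
    local IFS=,
    # Join the array entries with commas and pretty-print via jq.
    echo "{ ${accel_json_cfg[*]} }" | jq -r .
}

# Stand-in consumer; a real run would look like:
#   accel_perf -c /dev/fd/62 ... 62< <(build_accel_config)
cat /dev/fd/62 62< <(build_accel_config)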
00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.307 00:07:19.307 real 0m2.266s 00:07:19.307 user 0m2.034s 00:07:19.307 sys 0m0.137s 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.307 15:15:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.307 ************************************ 00:07:19.307 END TEST accel_dif_generate_copy 00:07:19.307 ************************************ 00:07:19.307 15:15:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.307 15:15:32 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:19.307 15:15:32 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.307 15:15:32 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:19.307 15:15:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.307 15:15:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.307 ************************************ 00:07:19.307 START TEST accel_comp 00:07:19.307 ************************************ 00:07:19.307 15:15:32 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:19.307 15:15:32 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:19.307 15:15:32 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:19.307 [2024-07-11 15:15:32.792191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:19.308 [2024-07-11 15:15:32.792376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65376 ] 00:07:19.573 [2024-07-11 15:15:32.965339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.573 [2024-07-11 15:15:33.113207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 15:15:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:21.745 15:15:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:21.745 15:15:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.745 15:15:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:21.746 15:15:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.746 00:07:21.746 real 0m2.271s 00:07:21.746 user 0m0.016s 00:07:21.746 sys 0m0.005s 00:07:21.746 ************************************ 00:07:21.746 END TEST accel_comp 00:07:21.746 15:15:35 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.746 15:15:35 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:21.746 ************************************ 00:07:21.746 15:15:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.746 15:15:35 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:21.746 15:15:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:21.746 15:15:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.746 15:15:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.746 ************************************ 00:07:21.746 START TEST accel_decomp 00:07:21.746 ************************************ 00:07:21.746 15:15:35 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:21.746 15:15:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:21.746 [2024-07-11 15:15:35.120520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:21.746 [2024-07-11 15:15:35.120702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65423 ] 00:07:21.746 [2024-07-11 15:15:35.291501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.005 [2024-07-11 15:15:35.449245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.005 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:22.006 15:15:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.921 15:15:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.922 ************************************ 00:07:23.922 END TEST accel_decomp 00:07:23.922 ************************************ 00:07:23.922 15:15:37 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.922 00:07:23.922 real 0m2.272s 00:07:23.922 user 0m2.029s 00:07:23.922 sys 0m0.151s 00:07:23.922 15:15:37 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.922 15:15:37 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:23.922 15:15:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.922 15:15:37 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:23.922 15:15:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:23.922 15:15:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.922 15:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.922 ************************************ 00:07:23.922 START TEST accel_decomp_full 00:07:23.922 ************************************ 00:07:23.922 15:15:37 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:23.922 15:15:37 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:23.922 [2024-07-11 15:15:37.441283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
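For reference, accel_decomp_full is the same decompress workload as the preceding accel_decomp test with one extra flag: -o 0. Judging from the '111250 bytes' value echoed in this run versus the '4096 bytes' of the tests above, -o sets the transfer size and 0 appears to request one full-input-sized transfer. A hedged sketch of an equivalent standalone invocation, using the paths traced in this log (the harness also passes -c /dev/fd/62 to supply a generated JSON accel config, but with the defaults traced here — accel_json_cfg=() and every guard false — that config is empty, so the sketch omits it):

    # Assumed equivalent of the command the harness builds above; flags mirror
    # the traced invocation: -t run time in seconds, -w workload, -l input
    # file, -y verify output, -o transfer size (0 appears to select a
    # full-input-sized transfer, the '111250 bytes' echoed below).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0
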
00:07:23.922 [2024-07-11 15:15:37.441467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65464 ] 00:07:24.181 [2024-07-11 15:15:37.612391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.181 [2024-07-11 15:15:37.765850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.440 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:24.441 15:15:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.346 ************************************ 00:07:26.346 END TEST accel_decomp_full 00:07:26.346 ************************************ 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.346 15:15:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.346 00:07:26.346 real 0m2.287s 00:07:26.346 user 0m2.039s 00:07:26.346 sys 0m0.154s 00:07:26.346 15:15:39 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.346 15:15:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:26.346 15:15:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.346 15:15:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:26.346 15:15:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:26.346 15:15:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.346 15:15:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.346 ************************************ 00:07:26.346 START TEST accel_decomp_mcore 00:07:26.346 ************************************ 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:26.346 15:15:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:26.346 [2024-07-11 15:15:39.776072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:26.346 [2024-07-11 15:15:39.776213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65505 ] 00:07:26.346 [2024-07-11 15:15:39.930827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.606 [2024-07-11 15:15:40.100945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.606 [2024-07-11 15:15:40.101093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.606 [2024-07-11 15:15:40.101382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.606 [2024-07-11 15:15:40.101388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.865 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.866 15:15:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.770 ************************************ 00:07:28.770 END TEST accel_decomp_mcore 00:07:28.770 ************************************ 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.770 00:07:28.770 real 0m2.311s 00:07:28.770 user 0m0.018s 00:07:28.770 sys 0m0.005s 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.770 15:15:42 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:28.770 15:15:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.770 15:15:42 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.770 15:15:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:28.770 15:15:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.770 15:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.770 ************************************ 00:07:28.770 START TEST accel_decomp_full_mcore 00:07:28.770 ************************************ 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.770 15:15:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:28.770 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:28.770 [2024-07-11 15:15:42.139824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:28.770 [2024-07-11 15:15:42.140004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65560 ] 00:07:28.770 [2024-07-11 15:15:42.301061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.028 [2024-07-11 15:15:42.455666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.028 [2024-07-11 15:15:42.455766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.028 [2024-07-11 15:15:42.455930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.028 [2024-07-11 15:15:42.455940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.028 15:15:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:29.028 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.029 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.029 15:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.930 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.931 15:15:44 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.931 00:07:30.931 real 0m2.341s 00:07:30.931 user 0m6.983s 00:07:30.931 sys 0m0.156s 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.931 15:15:44 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:30.931 ************************************ 00:07:30.931 END TEST accel_decomp_full_mcore 00:07:30.931 ************************************ 00:07:30.931 15:15:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.931 15:15:44 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.931 15:15:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:30.931 15:15:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.931 15:15:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.931 ************************************ 00:07:30.931 START TEST accel_decomp_mthread 00:07:30.931 ************************************ 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:30.931 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:30.931 [2024-07-11 15:15:44.533953] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
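The xtrace that dominates each of these blocks (IFS=:, read -r var val, case "$var" in) is the harness parsing accel_perf's colon-separated configuration echo so it can record accel_opc and accel_module for the [[ -n software ]] / [[ -n decompress ]] assertions at the end of every test. Below is a paraphrased reconstruction of that loop; the read/IFS mechanics and variable names come from the trace itself, while the case patterns and sample input are illustrative assumptions, not the actual accel/accel.sh source:

    # Stand-in for the config lines accel_perf writes back to the script.
    config_dump=$'opc:decompress\nmodule:software'
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # workload, e.g. decompress
            module) accel_module=$val ;;  # backend, e.g. software
        esac
    done <<< "$config_dump"

The accel_decomp_mthread run that starts here also adds -T 2 to the same invocation; the 'val=2' echoed below (where the single-threaded tests echo 'val=1') indicates that -T sets the thread count for this otherwise single-core (0x1) run.
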
00:07:30.931 [2024-07-11 15:15:44.534160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65604 ] 00:07:31.190 [2024-07-11 15:15:44.691670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.449 [2024-07-11 15:15:44.840368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:31.449 15:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.354 00:07:33.354 real 0m2.254s 00:07:33.354 user 0m2.027s 00:07:33.354 sys 0m0.136s 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.354 15:15:46 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:33.354 ************************************ 00:07:33.354 END TEST accel_decomp_mthread 00:07:33.354 ************************************ 00:07:33.354 15:15:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.354 15:15:46 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.354 15:15:46 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:33.354 15:15:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.354 15:15:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.354 ************************************ 00:07:33.354 START 
TEST accel_decomp_full_mthread 00:07:33.354 ************************************ 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:33.354 15:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:33.354 [2024-07-11 15:15:46.834875] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:33.354 [2024-07-11 15:15:46.835063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65645 ] 00:07:33.612 [2024-07-11 15:15:46.986970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.612 [2024-07-11 15:15:47.137977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
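The '4096 bytes' transfer size of the previous suite becomes '111250 bytes' here: the only difference between the two mthread runs is -o 0 on the accel_test command line, which appears to let the size of the bib test file drive the operation size instead of a fixed 4096-byte chunk. Side by side, with the paths from the trace (the harness additionally passes its accel config via -c /dev/fd/62, omitted here):

    SPDK=/home/vagrant/spdk_repo/spdk
    # chunked run (accel_decomp_mthread): 4096-byte operations
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
    # whole-file run (accel_decomp_full_mthread): -o 0, one op spans the payload
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2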
00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:33.871 15:15:47 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.871 15:15:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.777 00:07:35.777 real 0m2.281s 00:07:35.777 user 0m2.062s 00:07:35.777 sys 0m0.129s 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.777 15:15:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:35.777 ************************************ 00:07:35.777 END TEST accel_decomp_full_mthread 00:07:35.777 ************************************ 
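Every suite in this section runs under the same run_test wrapper from common/autotest_common.sh, which prints the START/END banners, times the body (the real/user/sys triplets above), and propagates the exit code. A rough sketch of that wrapper, not the exact source:

    run_test() {
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"    # produces the real/user/sys lines after each suite
        local rc=$?
        echo "************ END TEST $test_name ************"
        return $rc
    }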
00:07:35.777 15:15:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.777 15:15:49 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:35.777 15:15:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.777 15:15:49 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:35.777 15:15:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.777 15:15:49 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.777 15:15:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.777 15:15:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.777 15:15:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.777 15:15:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.777 15:15:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.777 15:15:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.777 15:15:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:35.777 15:15:49 accel -- accel/accel.sh@41 -- # jq -r . 00:07:35.777 ************************************ 00:07:35.777 START TEST accel_dif_functional_tests 00:07:35.777 ************************************ 00:07:35.777 15:15:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.777 [2024-07-11 15:15:49.232544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:35.777 [2024-07-11 15:15:49.232725] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65687 ] 00:07:36.036 [2024-07-11 15:15:49.402954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.036 [2024-07-11 15:15:49.573228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.036 [2024-07-11 15:15:49.573346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.036 [2024-07-11 15:15:49.573372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.295 00:07:36.295 00:07:36.295 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.295 http://cunit.sourceforge.net/ 00:07:36.295 00:07:36.295 00:07:36.295 Suite: accel_dif 00:07:36.296 Test: verify: DIF generated, GUARD check ...passed 00:07:36.296 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.296 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.296 Test: verify: DIF not generated, GUARD check ...passed 00:07:36.296 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 15:15:49.820962] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.296 passed 00:07:36.296 Test: verify: DIF not generated, REFTAG check ...passed 00:07:36.296 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:36.296 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:36.296 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:36.296 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-11 15:15:49.821140] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.296 [2024-07-11 15:15:49.821197] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.296 [2024-07-11 15:15:49.821295] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.296 passed 00:07:36.296 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.296 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:36.296 Test: verify copy: DIF generated, GUARD check ...[2024-07-11 15:15:49.821495] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 passed 00:07:36.296 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:36.296 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:36.296 Test: verify copy: DIF not generated, GUARD check ...[2024-07-11 15:15:49.821748] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 passed 00:07:36.296 Test: verify copy: DIF not generated, APPTAG check ...passed 00:07:36.296 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:36.296 Test: generate copy: DIF generated, GUARD check ...passed 00:07:36.296 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:36.296 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.296 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.296 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.296 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.296 Test: generate copy: iovecs-len validate ...[2024-07-11 15:15:49.822367] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
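For reading the failures above: every protected block carries an 8-byte T10 DIF tuple, bytes 0-1 the Guard (a CRC-16 over the block data), bytes 2-3 the Application Tag, bytes 4-7 the Reference Tag (typically seeded from the LBA). The negative tests corrupt these fields deliberately, and the recurring Actual=5a5a / 5a5a5a5a values suggest a 0x5a fill pattern for the corrupted bytes. To pull all mismatches out of a saved log (build.log is a placeholder file name):

    grep -o 'Failed to compare [A-Za-z ]*: LBA=[^,]*, Expected=[^,]*, Actual=[^ ]*' build.log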
00:07:36.296 passed 00:07:36.296 Test: generate copy: buffer alignment validate ...passed 00:07:36.296 00:07:36.296 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.296 suites 1 1 n/a 0 0 00:07:36.296 tests 26 26 26 0 0 00:07:36.296 asserts 115 115 115 0 n/a 00:07:36.296 00:07:36.296 Elapsed time = 0.005 seconds 00:07:37.284 00:07:37.284 real 0m1.658s 00:07:37.284 user 0m3.070s 00:07:37.284 sys 0m0.203s 00:07:37.284 15:15:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.284 15:15:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:37.284 ************************************ 00:07:37.284 END TEST accel_dif_functional_tests 00:07:37.284 ************************************ 00:07:37.284 15:15:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.284 00:07:37.284 real 0m54.671s 00:07:37.284 user 0m59.744s 00:07:37.284 sys 0m4.685s 00:07:37.284 15:15:50 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.284 15:15:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.284 ************************************ 00:07:37.284 END TEST accel 00:07:37.284 ************************************ 00:07:37.284 15:15:50 -- common/autotest_common.sh@1142 -- # return 0 00:07:37.284 15:15:50 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:37.284 15:15:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.284 15:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.284 15:15:50 -- common/autotest_common.sh@10 -- # set +x 00:07:37.543 ************************************ 00:07:37.543 START TEST accel_rpc 00:07:37.543 ************************************ 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:37.543 * Looking for test storage... 00:07:37.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:37.543 15:15:50 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:37.543 15:15:50 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65769 00:07:37.543 15:15:50 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 65769 00:07:37.543 15:15:50 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 65769 ']' 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.543 15:15:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.543 [2024-07-11 15:15:51.092047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
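The accel_rpc suite that starts here boots a target with --wait-for-rpc, so the accel framework is still unconfigured, and then drives it purely over JSON-RPC. A by-hand equivalent of the sequence traced below, assuming the default RPC socket:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    # (wait for /var/tmp/spdk.sock to appear before issuing calls)
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect   # accepted pre-init
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # prints: software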
00:07:37.543 [2024-07-11 15:15:51.092226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65769 ] 00:07:37.802 [2024-07-11 15:15:51.260222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.802 [2024-07-11 15:15:51.410510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.737 15:15:51 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.737 15:15:51 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:38.737 15:15:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:38.737 15:15:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:38.737 15:15:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:38.737 15:15:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:38.737 15:15:51 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:38.737 15:15:51 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.737 15:15:51 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.737 15:15:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.737 ************************************ 00:07:38.737 START TEST accel_assign_opcode 00:07:38.737 ************************************ 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:38.737 [2024-07-11 15:15:52.011357] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:38.737 [2024-07-11 15:15:52.019343] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.737 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:38.996 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.255 software 00:07:39.255 ************************************ 00:07:39.255 END TEST accel_assign_opcode 00:07:39.255 ************************************ 00:07:39.255 00:07:39.255 real 0m0.632s 00:07:39.255 user 0m0.057s 00:07:39.255 sys 0m0.010s 00:07:39.255 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.255 15:15:52 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:39.255 15:15:52 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 65769 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 65769 ']' 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 65769 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65769 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.255 killing process with pid 65769 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65769' 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@967 -- # kill 65769 00:07:39.255 15:15:52 accel_rpc -- common/autotest_common.sh@972 -- # wait 65769 00:07:41.158 00:07:41.158 real 0m3.613s 00:07:41.158 user 0m3.676s 00:07:41.158 sys 0m0.454s 00:07:41.158 15:15:54 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.158 ************************************ 00:07:41.158 END TEST accel_rpc 00:07:41.158 ************************************ 00:07:41.158 15:15:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.158 15:15:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:41.158 15:15:54 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.158 15:15:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.158 15:15:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.158 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:07:41.158 ************************************ 00:07:41.158 START TEST app_cmdline 00:07:41.158 ************************************ 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.158 * Looking for test storage... 
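app_cmdline exercises the RPC allowlist: the target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else must fail with JSON-RPC error -32601, as the env_dpdk_get_mem_stats probe below confirms. By hand:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK/scripts/rpc.py" spdk_get_version         # allowed: returns the version object
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # blocked: "Method not found" (-32601)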
00:07:41.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.158 15:15:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.158 15:15:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65880 00:07:41.158 15:15:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65880 00:07:41.158 15:15:54 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 65880 ']' 00:07:41.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.158 15:15:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:41.158 [2024-07-11 15:15:54.758677] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:41.158 [2024-07-11 15:15:54.758850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65880 ] 00:07:41.417 [2024-07-11 15:15:54.924539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.675 [2024-07-11 15:15:55.089261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.242 15:15:55 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.242 15:15:55 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:42.242 15:15:55 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:42.500 { 00:07:42.500 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:42.500 "fields": { 00:07:42.500 "major": 24, 00:07:42.500 "minor": 9, 00:07:42.500 "patch": 0, 00:07:42.500 "suffix": "-pre", 00:07:42.500 "commit": "719d03c6a" 00:07:42.500 } 00:07:42.500 } 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.500 15:15:55 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.500 15:15:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.500 15:15:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.500 15:15:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.500 15:15:56 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:42.500 15:15:56 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.758 request: 00:07:42.758 { 00:07:42.758 "method": "env_dpdk_get_mem_stats", 00:07:42.758 "req_id": 1 00:07:42.758 } 00:07:42.758 Got JSON-RPC error response 00:07:42.758 response: 00:07:42.758 { 00:07:42.758 "code": -32601, 00:07:42.758 "message": "Method not found" 00:07:42.758 } 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.758 15:15:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65880 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 65880 ']' 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 65880 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65880 00:07:42.758 killing process with pid 65880 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65880' 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@967 -- # kill 65880 00:07:42.758 15:15:56 app_cmdline -- common/autotest_common.sh@972 -- # wait 65880 00:07:44.659 ************************************ 00:07:44.659 END TEST app_cmdline 00:07:44.659 ************************************ 00:07:44.659 00:07:44.659 real 0m3.617s 00:07:44.659 user 0m4.043s 00:07:44.659 sys 0m0.504s 00:07:44.659 15:15:58 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.659 15:15:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.659 15:15:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.659 15:15:58 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:44.659 15:15:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.659 15:15:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.659 15:15:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.659 ************************************ 00:07:44.659 START TEST version 00:07:44.659 ************************************ 00:07:44.659 15:15:58 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:44.917 * Looking for test storage... 00:07:44.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:44.917 15:15:58 version -- app/version.sh@17 -- # get_header_version major 00:07:44.917 15:15:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # cut -f2 00:07:44.917 15:15:58 version -- app/version.sh@17 -- # major=24 00:07:44.917 15:15:58 version -- app/version.sh@18 -- # get_header_version minor 00:07:44.917 15:15:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # cut -f2 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.917 15:15:58 version -- app/version.sh@18 -- # minor=9 00:07:44.917 15:15:58 version -- app/version.sh@19 -- # get_header_version patch 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # cut -f2 00:07:44.917 15:15:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.917 15:15:58 version -- app/version.sh@19 -- # patch=0 00:07:44.917 15:15:58 version -- app/version.sh@20 -- # get_header_version suffix 00:07:44.917 15:15:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # cut -f2 00:07:44.917 15:15:58 version -- app/version.sh@14 -- # tr -d '"' 00:07:44.917 15:15:58 version -- app/version.sh@20 -- # suffix=-pre 00:07:44.917 15:15:58 version -- app/version.sh@22 -- # version=24.9 00:07:44.918 15:15:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.918 15:15:58 version -- app/version.sh@28 -- # version=24.9rc0 00:07:44.918 15:15:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:44.918 15:15:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:44.918 15:15:58 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:44.918 15:15:58 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:44.918 00:07:44.918 real 0m0.150s 00:07:44.918 user 0m0.088s 00:07:44.918 sys 0m0.093s 00:07:44.918 ************************************ 00:07:44.918 END TEST version 00:07:44.918 ************************************ 00:07:44.918 15:15:58 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.918 15:15:58 version -- common/autotest_common.sh@10 -- # set +x 
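The version suite above cross-checks two sources of truth: the SPDK_VERSION_* macros in include/spdk/version.h and the spdk Python package. Reproducing both probes with the exact pipelines from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"')
    py=$(PYTHONPATH="$SPDK/python" python3 -c 'import spdk; print(spdk.__version__)')
    echo "header=$major.$minor python=$py"   # 24.9 vs 24.9rc0 here; the -pre suffix maps to rc0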
00:07:44.918 15:15:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:44.918 15:15:58 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:44.918 15:15:58 -- spdk/autotest.sh@198 -- # uname -s 00:07:44.918 15:15:58 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:44.918 15:15:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:44.918 15:15:58 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:44.918 15:15:58 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:07:44.918 15:15:58 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:44.918 15:15:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.918 15:15:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.918 15:15:58 -- common/autotest_common.sh@10 -- # set +x 00:07:44.918 ************************************ 00:07:44.918 START TEST blockdev_nvme 00:07:44.918 ************************************ 00:07:44.918 15:15:58 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:44.918 * Looking for test storage... 00:07:44.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:44.918 15:15:58 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:44.918 15:15:58 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:45.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
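setup_nvme_conf, traced next, is how the bdev tests find their disks: gen_nvme.sh scans the PCIe bus and emits one bdev_nvme_attach_controller entry per NVMe device (Nvme0 at 0000:00:10.0 through Nvme3 at 0000:00:13.0 on this VM), and the resulting JSON is replayed into the freshly started target. Condensed, with the flag usage mirrored from the rpc_cmd trace below (rpc_cmd is the harness wrapper around scripts/rpc.py):

    SPDK=/home/vagrant/spdk_repo/spdk
    json=$("$SPDK/scripts/gen_nvme.sh")
    "$SPDK/scripts/rpc.py" load_subsystem_config -j "$json"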
00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66047 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 66047 00:07:45.176 15:15:58 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 66047 ']' 00:07:45.176 15:15:58 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:45.176 15:15:58 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.176 15:15:58 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.176 15:15:58 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.176 15:15:58 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.176 15:15:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.176 [2024-07-11 15:15:58.654481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:45.176 [2024-07-11 15:15:58.654947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66047 ] 00:07:45.435 [2024-07-11 15:15:58.827122] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.435 [2024-07-11 15:15:58.975854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.001 15:15:59 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.001 15:15:59 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:07:46.001 15:15:59 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:46.001 15:15:59 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:07:46.001 15:15:59 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:46.001 15:15:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:46.001 15:15:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.259 15:15:59 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:46.259 15:15:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.259 15:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.518 15:15:59 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:46.518 15:15:59 blockdev_nvme -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.518 15:15:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:07:46.518 15:15:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.518 15:15:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.518 15:15:59 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.518 15:15:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 15:16:00 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.518 15:16:00 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:46.518 15:16:00 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:46.518 15:16:00 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:46.518 15:16:00 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.518 15:16:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.518 15:16:00 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.518 15:16:00 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:46.518 15:16:00 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:46.519 15:16:00 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0721fd88-6e3b-40fb-ac6c-20a00e0ed795"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0721fd88-6e3b-40fb-ac6c-20a00e0ed795",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": 
false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d4a1776e-8372-4a36-ba89-874c14784f22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d4a1776e-8372-4a36-ba89-874c14784f22",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9b7680b8-f5b9-46af-8086-2e1cbc1ad077"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9b7680b8-f5b9-46af-8086-2e1cbc1ad077",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "95418eca-ebfb-4271-93b1-ba5ac1689088"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95418eca-ebfb-4271-93b1-ba5ac1689088",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "371d715a-5eab-4989-a8ea-12ee702e4e49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "371d715a-5eab-4989-a8ea-12ee702e4e49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "92d77bf2-8557-4056-a934-42db79494bce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "92d77bf2-8557-4056-a934-42db79494bce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' 
' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:46.777 15:16:00 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:46.777 15:16:00 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:07:46.777 15:16:00 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:46.777 15:16:00 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 66047 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 66047 ']' 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 66047 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66047 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.777 killing process with pid 66047 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66047' 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 66047 00:07:46.777 15:16:00 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 66047 00:07:48.676 15:16:01 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:48.676 15:16:01 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:48.676 15:16:01 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:48.676 15:16:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.676 15:16:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.676 ************************************ 00:07:48.676 START TEST bdev_hello_world 00:07:48.676 ************************************ 00:07:48.676 15:16:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:48.676 [2024-07-11 15:16:02.026123] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:48.676 [2024-07-11 15:16:02.026289] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66131 ] 00:07:48.676 [2024-07-11 15:16:02.195913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.934 [2024-07-11 15:16:02.362202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.501 [2024-07-11 15:16:02.919835] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:49.501 [2024-07-11 15:16:02.919907] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:49.501 [2024-07-11 15:16:02.919949] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:49.501 [2024-07-11 15:16:02.923029] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:49.501 [2024-07-11 15:16:02.923487] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:49.501 [2024-07-11 15:16:02.923520] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:49.501 [2024-07-11 15:16:02.923815] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:49.501 00:07:49.501 [2024-07-11 15:16:02.923866] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:50.434 ************************************ 00:07:50.434 END TEST bdev_hello_world 00:07:50.434 ************************************ 00:07:50.434 00:07:50.434 real 0m1.956s 00:07:50.434 user 0m1.628s 00:07:50.434 sys 0m0.221s 00:07:50.434 15:16:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.434 15:16:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:50.434 15:16:03 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:50.434 15:16:03 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:50.434 15:16:03 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:50.434 15:16:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.434 15:16:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:50.434 ************************************ 00:07:50.434 START TEST bdev_bounds 00:07:50.434 ************************************ 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=66173 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.434 Process bdevio pid: 66173 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66173' 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 66173 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 66173 ']' 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.434 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.434 15:16:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:50.434 [2024-07-11 15:16:04.044202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:50.434 [2024-07-11 15:16:04.044680] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66173 ] 00:07:50.693 [2024-07-11 15:16:04.215212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.951 [2024-07-11 15:16:04.368801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.951 [2024-07-11 15:16:04.368911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.951 [2024-07-11 15:16:04.368936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.519 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.519 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:51.519 15:16:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:51.519 I/O targets: 00:07:51.519 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:51.519 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:51.519 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.519 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.519 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.519 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:51.519 00:07:51.519 00:07:51.519 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.519 http://cunit.sourceforge.net/ 00:07:51.519 00:07:51.519 00:07:51.519 Suite: bdevio tests on: Nvme3n1 00:07:51.519 Test: blockdev write read block ...passed 00:07:51.519 Test: blockdev write zeroes read block ...passed 00:07:51.519 Test: blockdev write zeroes read no split ...passed 00:07:51.778 Test: blockdev write zeroes read split ...passed 00:07:51.778 Test: blockdev write zeroes read split partial ...passed 00:07:51.778 Test: blockdev reset ...[2024-07-11 15:16:05.189698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:51.778 [2024-07-11 15:16:05.193653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
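Each suite below follows the same protocol: the bdevio binary was started above with -w so it idles waiting for an RPC, and tests.py perform_tests then runs every registered CUnit case against each bdev from the config. A sketch of the same two-step invocation outside the harness, using the paths from this run:

  $ test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  $ test/bdev/bdevio/tests.py perform_tests

perform_tests is the only verb used here; the per-test passed/failed lines that follow are CUnit output from inside the bdevio process.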
00:07:51.778 passed 00:07:51.778 Test: blockdev write read 8 blocks ...passed 00:07:51.778 Test: blockdev write read size > 128k ...passed 00:07:51.778 Test: blockdev write read invalid size ...passed 00:07:51.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.778 Test: blockdev write read max offset ...passed 00:07:51.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.779 Test: blockdev writev readv 8 blocks ...passed 00:07:51.779 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.779 Test: blockdev writev readv block ...passed 00:07:51.779 Test: blockdev writev readv size > 128k ...passed 00:07:51.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.779 Test: blockdev comparev and writev ...[2024-07-11 15:16:05.203581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26e60a000 len:0x1000 00:07:51.779 [2024-07-11 15:16:05.203672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.779 passed 00:07:51.779 Test: blockdev nvme passthru rw ...passed 00:07:51.779 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.779 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:05.204635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.779 [2024-07-11 15:16:05.204684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.779 passed 00:07:51.779 Test: blockdev copy ...passed 00:07:51.779 Suite: bdevio tests on: Nvme2n3 00:07:51.779 Test: blockdev write read block ...passed 00:07:51.779 Test: blockdev write zeroes read block ...passed 00:07:51.779 Test: blockdev write zeroes read no split ...passed 00:07:51.779 Test: blockdev write zeroes read split ...passed 00:07:51.779 Test: blockdev write zeroes read split partial ...passed 00:07:51.779 Test: blockdev reset ...[2024-07-11 15:16:05.261598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:51.779 [2024-07-11 15:16:05.265610] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
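Two recurring patterns in these suites are worth decoding. The COMPARE FAILURE (02/85) completions in the comparev-and-writev cases are the expected negative path: the pair is hex (SCT/SC), where SCT 0x2 is Media and Data Integrity Errors and SC 0x85 is Compare Failure, i.e. the deliberate miscompare the test checks for. The reset step that opens each suite can also be provoked by hand against a live target; a sketch, assuming the controller behind 0000:00:12.0 was attached under the name Nvme2 (the Nvme2n1..n3 namespaces suggest so):

  $ ./scripts/rpc.py bdev_nvme_reset_controller Nvme2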
00:07:51.779 passed 00:07:51.779 Test: blockdev write read 8 blocks ...passed 00:07:51.779 Test: blockdev write read size > 128k ...passed 00:07:51.779 Test: blockdev write read invalid size ...passed 00:07:51.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.779 Test: blockdev write read max offset ...passed 00:07:51.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.779 Test: blockdev writev readv 8 blocks ...passed 00:07:51.779 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.779 Test: blockdev writev readv block ...passed 00:07:51.779 Test: blockdev writev readv size > 128k ...passed 00:07:51.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.779 Test: blockdev comparev and writev ...[2024-07-11 15:16:05.274953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x265404000 len:0x1000 00:07:51.779 [2024-07-11 15:16:05.275067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.779 passed 00:07:51.779 Test: blockdev nvme passthru rw ...passed 00:07:51.779 Test: blockdev nvme passthru vendor specific ...[2024-07-11 15:16:05.275936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.779 [2024-07-11 15:16:05.275999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.779 passed 00:07:51.779 Test: blockdev nvme admin passthru ...passed 00:07:51.779 Test: blockdev copy ...passed 00:07:51.779 Suite: bdevio tests on: Nvme2n2 00:07:51.779 Test: blockdev write read block ...passed 00:07:51.779 Test: blockdev write zeroes read block ...passed 00:07:51.779 Test: blockdev write zeroes read no split ...passed 00:07:51.779 Test: blockdev write zeroes read split ...passed 00:07:51.779 Test: blockdev write zeroes read split partial ...passed 00:07:51.779 Test: blockdev reset ...[2024-07-11 15:16:05.330901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:51.779 [2024-07-11 15:16:05.334861] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:51.779 passed 00:07:51.779 Test: blockdev write read 8 blocks ...passed 00:07:51.779 Test: blockdev write read size > 128k ...passed 00:07:51.779 Test: blockdev write read invalid size ...passed 00:07:51.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.779 Test: blockdev write read max offset ...passed 00:07:51.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.779 Test: blockdev writev readv 8 blocks ...passed 00:07:51.779 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.779 Test: blockdev writev readv block ...passed 00:07:51.779 Test: blockdev writev readv size > 128k ...passed 00:07:51.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.779 Test: blockdev comparev and writev ...[2024-07-11 15:16:05.344515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27683a000 len:0x1000 00:07:51.779 [2024-07-11 15:16:05.344585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.779 passed 00:07:51.779 Test: blockdev nvme passthru rw ...passed 00:07:51.779 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.779 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:05.345381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.779 [2024-07-11 15:16:05.345459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.779 passed 00:07:51.779 Test: blockdev copy ...passed 00:07:51.779 Suite: bdevio tests on: Nvme2n1 00:07:51.779 Test: blockdev write read block ...passed 00:07:51.779 Test: blockdev write zeroes read block ...passed 00:07:51.779 Test: blockdev write zeroes read no split ...passed 00:07:51.779 Test: blockdev write zeroes read split ...passed 00:07:52.038 Test: blockdev write zeroes read split partial ...passed 00:07:52.038 Test: blockdev reset ...[2024-07-11 15:16:05.404245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:52.038 [2024-07-11 15:16:05.408256] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.038 passed 00:07:52.038 Test: blockdev write read 8 blocks ...passed 00:07:52.038 Test: blockdev write read size > 128k ...passed 00:07:52.038 Test: blockdev write read invalid size ...passed 00:07:52.038 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.038 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.038 Test: blockdev write read max offset ...passed 00:07:52.038 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.038 Test: blockdev writev readv 8 blocks ...passed 00:07:52.038 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.038 Test: blockdev writev readv block ...passed 00:07:52.038 Test: blockdev writev readv size > 128k ...passed 00:07:52.038 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.038 Test: blockdev comparev and writev ...[2024-07-11 15:16:05.417271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276834000 len:0x1000 00:07:52.038 [2024-07-11 15:16:05.417345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.038 passed 00:07:52.038 Test: blockdev nvme passthru rw ...passed 00:07:52.038 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.038 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:05.418454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.038 [2024-07-11 15:16:05.418519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.038 passed 00:07:52.038 Test: blockdev copy ...passed 00:07:52.038 Suite: bdevio tests on: Nvme1n1 00:07:52.038 Test: blockdev write read block ...passed 00:07:52.038 Test: blockdev write zeroes read block ...passed 00:07:52.038 Test: blockdev write zeroes read no split ...passed 00:07:52.038 Test: blockdev write zeroes read split ...passed 00:07:52.038 Test: blockdev write zeroes read split partial ...passed 00:07:52.038 Test: blockdev reset ...[2024-07-11 15:16:05.483277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:52.038 [2024-07-11 15:16:05.487322] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.038 passed 00:07:52.038 Test: blockdev write read 8 blocks ...passed 00:07:52.038 Test: blockdev write read size > 128k ...passed 00:07:52.038 Test: blockdev write read invalid size ...passed 00:07:52.038 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.038 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.038 Test: blockdev write read max offset ...passed 00:07:52.038 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.038 Test: blockdev writev readv 8 blocks ...passed 00:07:52.038 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.038 Test: blockdev writev readv block ...passed 00:07:52.038 Test: blockdev writev readv size > 128k ...passed 00:07:52.038 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.038 Test: blockdev comparev and writev ...[2024-07-11 15:16:05.496008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x276830000 len:0x1000 00:07:52.038 [2024-07-11 15:16:05.496098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.038 passed 00:07:52.038 Test: blockdev nvme passthru rw ...passed 00:07:52.038 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.038 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:05.496956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.038 [2024-07-11 15:16:05.497012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.038 passed 00:07:52.038 Test: blockdev copy ...passed 00:07:52.038 Suite: bdevio tests on: Nvme0n1 00:07:52.038 Test: blockdev write read block ...passed 00:07:52.038 Test: blockdev write zeroes read block ...passed 00:07:52.038 Test: blockdev write zeroes read no split ...passed 00:07:52.038 Test: blockdev write zeroes read split ...passed 00:07:52.038 Test: blockdev write zeroes read split partial ...passed 00:07:52.038 Test: blockdev reset ...[2024-07-11 15:16:05.560894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:52.038 [2024-07-11 15:16:05.564765] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.038 passed 00:07:52.038 Test: blockdev write read 8 blocks ...passed 00:07:52.038 Test: blockdev write read size > 128k ...passed 00:07:52.038 Test: blockdev write read invalid size ...passed 00:07:52.038 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.038 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.038 Test: blockdev write read max offset ...passed 00:07:52.038 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.038 Test: blockdev writev readv 8 blocks ...passed 00:07:52.038 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.038 Test: blockdev writev readv block ...passed 00:07:52.038 Test: blockdev writev readv size > 128k ...passed 00:07:52.039 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.039 Test: blockdev comparev and writev ...passed 00:07:52.039 Test: blockdev nvme passthru rw ...[2024-07-11 15:16:05.572548] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:52.039 separate metadata which is not supported yet. 00:07:52.039 passed 00:07:52.039 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.039 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:05.573104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:52.039 [2024-07-11 15:16:05.573167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:52.039 passed 00:07:52.039 Test: blockdev copy ...passed 00:07:52.039 00:07:52.039 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.039 suites 6 6 n/a 0 0 00:07:52.039 tests 138 138 138 0 0 00:07:52.039 asserts 893 893 893 0 n/a 00:07:52.039 00:07:52.039 Elapsed time = 1.197 seconds 00:07:52.039 0 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 66173 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 66173 ']' 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 66173 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66173 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66173' 00:07:52.039 killing process with pid 66173 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 66173 00:07:52.039 15:16:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 66173 00:07:52.976 15:16:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:52.976 00:07:52.976 real 0m2.550s 00:07:52.976 user 0m6.313s 00:07:52.976 sys 0m0.347s 00:07:52.976 15:16:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.976 15:16:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:52.976 ************************************ 00:07:52.976 END 
TEST bdev_bounds 00:07:52.976 ************************************ 00:07:52.976 15:16:06 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:52.976 15:16:06 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:52.976 15:16:06 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:52.976 15:16:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.976 15:16:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.976 ************************************ 00:07:52.976 START TEST bdev_nbd 00:07:52.976 ************************************ 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:52.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
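The nbd test drives three RPCs against the bdev_svc app started for it below: nbd_start_disk maps a bdev to a /dev/nbdN node, nbd_get_disks lists the current mappings, and nbd_stop_disk tears one down. A sketch of one full cycle over the same /var/tmp/spdk-nbd.sock socket used in this run:

  $ ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  $ ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  $ ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0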
00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=66237 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 66237 /var/tmp/spdk-nbd.sock 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 66237 ']' 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.976 15:16:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.234 [2024-07-11 15:16:06.637285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:53.234 [2024-07-11 15:16:06.637645] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.234 [2024-07-11 15:16:06.797483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.493 [2024-07-11 15:16:06.952474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.093 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.352 1+0 records in 00:07:54.352 1+0 records out 00:07:54.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745613 s, 5.5 MB/s 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.352 15:16:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.611 1+0 records in 00:07:54.611 1+0 records out 00:07:54.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576419 s, 7.1 MB/s 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.611 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.869 1+0 records in 00:07:54.869 1+0 records out 00:07:54.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740422 s, 5.5 MB/s 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.869 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:54.870 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.870 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.870 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:54.870 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.870 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.870 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.129 1+0 records in 00:07:55.129 1+0 records out 00:07:55.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832831 s, 4.9 MB/s 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.129 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.389 1+0 records in 00:07:55.389 1+0 records out 00:07:55.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892323 s, 4.6 MB/s 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.389 15:16:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.648 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.648 1+0 records in 00:07:55.648 1+0 records out 00:07:55.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000954197 s, 4.3 MB/s 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.649 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd0", 00:07:55.908 "bdev_name": "Nvme0n1" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd1", 00:07:55.908 "bdev_name": "Nvme1n1" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd2", 00:07:55.908 "bdev_name": "Nvme2n1" 00:07:55.908 }, 00:07:55.908 
{ 00:07:55.908 "nbd_device": "/dev/nbd3", 00:07:55.908 "bdev_name": "Nvme2n2" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd4", 00:07:55.908 "bdev_name": "Nvme2n3" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd5", 00:07:55.908 "bdev_name": "Nvme3n1" 00:07:55.908 } 00:07:55.908 ]' 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd0", 00:07:55.908 "bdev_name": "Nvme0n1" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd1", 00:07:55.908 "bdev_name": "Nvme1n1" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd2", 00:07:55.908 "bdev_name": "Nvme2n1" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd3", 00:07:55.908 "bdev_name": "Nvme2n2" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd4", 00:07:55.908 "bdev_name": "Nvme2n3" 00:07:55.908 }, 00:07:55.908 { 00:07:55.908 "nbd_device": "/dev/nbd5", 00:07:55.908 "bdev_name": "Nvme3n1" 00:07:55.908 } 00:07:55.908 ]' 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.908 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.476 15:16:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.476 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.735 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.994 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.253 15:16:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.253 15:16:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.512 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.771 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:58.029 /dev/nbd0 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.029 1+0 records in 00:07:58.029 1+0 records out 00:07:58.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065114 s, 6.3 MB/s 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.029 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:58.288 /dev/nbd1 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.288 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.289 1+0 records in 00:07:58.289 1+0 records out 00:07:58.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610753 s, 6.7 MB/s 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.289 15:16:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:58.548 /dev/nbd10 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.548 1+0 records in 00:07:58.548 1+0 records out 00:07:58.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589746 s, 6.9 MB/s 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
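The waitfornbd checks traced above (@866-@887) first poll for the device node, then prove it can actually serve reads. Read back from this xtrace, the helper looks roughly like the sketch below; this is an approximation recovered from the log, not the verbatim common/autotest_common.sh source, and the sleep between retries is assumed since only the loop bounds are visible:

    waitfornbd() {
        local nbd_name=$1
        local i
        # wait for the kernel to publish the device in /proc/partitions (@869-@871)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the trace
        done
        # then prove the device serves I/O: read back one 4 KiB block (@882-@887)
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
            rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
            [ "$size" != "0" ] && break
            sleep 0.1
        done
    }

The scratch path is hardcoded here exactly as it appears in the dd/stat/rm lines; the real helper presumably parameterizes it.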
00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.548 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:58.806 /dev/nbd11 00:07:58.806 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:58.806 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:58.806 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:58.806 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:58.806 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.807 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.807 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.065 1+0 records in 00:07:59.065 1+0 records out 00:07:59.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590477 s, 6.9 MB/s 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:59.065 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:59.323 /dev/nbd12 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( 
i = 1 )) 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.323 1+0 records in 00:07:59.323 1+0 records out 00:07:59.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804026 s, 5.1 MB/s 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:59.323 15:16:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:59.581 /dev/nbd13 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.581 1+0 records in 00:07:59.581 1+0 records out 00:07:59.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514034 s, 8.0 MB/s 00:07:59.581 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.582 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd0", 00:07:59.840 "bdev_name": "Nvme0n1" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd1", 00:07:59.840 "bdev_name": "Nvme1n1" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd10", 00:07:59.840 "bdev_name": "Nvme2n1" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd11", 00:07:59.840 "bdev_name": "Nvme2n2" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd12", 00:07:59.840 "bdev_name": "Nvme2n3" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd13", 00:07:59.840 "bdev_name": "Nvme3n1" 00:07:59.840 } 00:07:59.840 ]' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd0", 00:07:59.840 "bdev_name": "Nvme0n1" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd1", 00:07:59.840 "bdev_name": "Nvme1n1" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd10", 00:07:59.840 "bdev_name": "Nvme2n1" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd11", 00:07:59.840 "bdev_name": "Nvme2n2" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd12", 00:07:59.840 "bdev_name": "Nvme2n3" 00:07:59.840 }, 00:07:59.840 { 00:07:59.840 "nbd_device": "/dev/nbd13", 00:07:59.840 "bdev_name": "Nvme3n1" 00:07:59.840 } 00:07:59.840 ]' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:59.840 /dev/nbd1 00:07:59.840 /dev/nbd10 00:07:59.840 /dev/nbd11 00:07:59.840 /dev/nbd12 00:07:59.840 /dev/nbd13' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:59.840 /dev/nbd1 00:07:59.840 /dev/nbd10 00:07:59.840 /dev/nbd11 00:07:59.840 /dev/nbd12 00:07:59.840 /dev/nbd13' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:59.840 256+0 records in 00:07:59.840 256+0 records out 00:07:59.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103752 s, 101 MB/s 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.840 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:00.099 256+0 records in 00:08:00.099 256+0 records out 00:08:00.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17018 s, 6.2 MB/s 00:08:00.099 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.099 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:00.358 256+0 records in 00:08:00.358 256+0 records out 00:08:00.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182294 s, 5.8 MB/s 00:08:00.358 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.358 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:00.358 256+0 records in 00:08:00.358 256+0 records out 00:08:00.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168953 s, 6.2 MB/s 00:08:00.358 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.358 15:16:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:00.616 256+0 records in 00:08:00.616 256+0 records out 00:08:00.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194463 s, 5.4 MB/s 00:08:00.616 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.616 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:00.874 256+0 records in 00:08:00.875 256+0 records out 00:08:00.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152767 s, 6.9 MB/s 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:00.875 256+0 records in 00:08:00.875 256+0 records out 00:08:00.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173112 s, 6.1 MB/s 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:00.875 
15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.875 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.133 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.392 15:16:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.649 15:16:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.649 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.907 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:02.165 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.166 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd12 /proc/partitions 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.424 15:16:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.683 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:02.941 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:03.200 malloc_lvol_verify 00:08:03.200 15:16:16 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:03.458 54d9a484-aca8-410c-a436-826efa77938d 00:08:03.458 15:16:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:03.717 16094dd2-e45c-4fdb-a51f-0570336f4949 00:08:03.717 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:03.976 /dev/nbd0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:03.976 mke2fs 1.46.5 (30-Dec-2021) 00:08:03.976 Discarding device blocks: 0/4096 done 00:08:03.976 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:03.976 00:08:03.976 Allocating group tables: 0/1 done 00:08:03.976 Writing inode tables: 0/1 done 00:08:03.976 Creating journal (1024 blocks): done 00:08:03.976 Writing superblocks and filesystem accounting information: 0/1 done 00:08:03.976 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.976 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 66237 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 66237 ']' 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 66237 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66237 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.234 killing process with pid 66237 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66237' 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 66237 00:08:04.234 15:16:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 66237 00:08:05.169 15:16:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:05.169 00:08:05.169 real 0m12.092s 00:08:05.169 user 0m16.966s 00:08:05.169 sys 0m3.866s 00:08:05.169 15:16:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.169 15:16:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:05.169 ************************************ 00:08:05.169 END TEST bdev_nbd 00:08:05.169 ************************************ 00:08:05.169 15:16:18 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:05.169 15:16:18 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:05.169 15:16:18 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:05.169 skipping fio tests on NVMe due to multi-ns failures. 00:08:05.169 15:16:18 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:05.169 15:16:18 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:05.169 15:16:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:05.169 15:16:18 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:05.169 15:16:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.170 15:16:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.170 ************************************ 00:08:05.170 START TEST bdev_verify 00:08:05.170 ************************************ 00:08:05.170 15:16:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:05.170 [2024-07-11 15:16:18.775730] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:05.170 [2024-07-11 15:16:18.775892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66628 ] 00:08:05.429 [2024-07-11 15:16:18.934698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:05.687 [2024-07-11 15:16:19.092907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.687 [2024-07-11 15:16:19.092925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.253 Running I/O for 5 seconds... 
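Before the verify results arrive, a note on the teardown just traced (@948-@972): that is the generic killprocess helper shutting down the spdk-nbd reactor (pid 66237). A sketch recovered from this xtrace; the sudo special case at @958 is never taken in this run, so its body cannot be reconstructed and is only noted:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @948: nothing to kill
        kill -0 "$pid" || return 0                          # @952: already gone
        if [ "$(uname)" = Linux ]; then                     # @953
            process_name=$(ps --no-headers -o comm= "$pid") # @954: reactor_0 here
        fi
        # @958 would switch to killing a sudo wrapper's child; not exercised in this run
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid"                                         # @972: reap it so the RPC socket frees up
    }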
00:08:11.565 00:08:11.565 Latency(us) 00:08:11.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.565 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x0 length 0xbd0bd 00:08:11.565 Nvme0n1 : 5.05 1519.61 5.94 0.00 0.00 83949.37 17754.30 95325.09 00:08:11.565 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:11.565 Nvme0n1 : 5.06 1519.00 5.93 0.00 0.00 83557.44 20494.89 69587.32 00:08:11.565 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x0 length 0xa0000 00:08:11.565 Nvme1n1 : 5.06 1518.45 5.93 0.00 0.00 83794.44 20733.21 90558.84 00:08:11.565 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0xa0000 length 0xa0000 00:08:11.565 Nvme1n1 : 5.07 1527.16 5.97 0.00 0.00 83003.97 5659.93 69587.32 00:08:11.565 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x0 length 0x80000 00:08:11.565 Nvme2n1 : 5.06 1517.89 5.93 0.00 0.00 83666.29 20137.43 83886.08 00:08:11.565 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x80000 length 0x80000 00:08:11.565 Nvme2n1 : 5.08 1536.89 6.00 0.00 0.00 82393.68 6791.91 74830.20 00:08:11.565 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x0 length 0x80000 00:08:11.565 Nvme2n2 : 5.06 1517.32 5.93 0.00 0.00 83544.52 18707.55 85315.96 00:08:11.565 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x80000 length 0x80000 00:08:11.565 Nvme2n2 : 5.08 1536.48 6.00 0.00 0.00 82236.97 6851.49 80549.70 00:08:11.565 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x0 length 0x80000 00:08:11.565 Nvme2n3 : 5.08 1525.77 5.96 0.00 0.00 82975.21 4408.79 92465.34 00:08:11.565 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x80000 length 0x80000 00:08:11.565 Nvme2n3 : 5.05 1520.21 5.94 0.00 0.00 83905.40 17754.30 81026.33 00:08:11.565 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x0 length 0x20000 00:08:11.565 Nvme3n1 : 5.09 1535.02 6.00 0.00 0.00 82388.60 7864.32 95325.09 00:08:11.565 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:11.565 Verification LBA range: start 0x20000 length 0x20000 00:08:11.565 Nvme3n1 : 5.05 1519.73 5.94 0.00 0.00 83729.54 19899.11 74830.20 00:08:11.565 =================================================================================================================== 00:08:11.565 Total : 18293.55 71.46 0.00 0.00 83257.92 4408.79 95325.09 00:08:12.501 00:08:12.501 real 0m7.393s 00:08:12.501 user 0m13.593s 00:08:12.501 sys 0m0.223s 00:08:12.501 15:16:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.501 ************************************ 00:08:12.501 END TEST bdev_verify 00:08:12.501 ************************************ 00:08:12.501 15:16:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 15:16:26 blockdev_nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:08:12.759 15:16:26 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:12.759 15:16:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:12.759 15:16:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.759 15:16:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.759 ************************************ 00:08:12.759 START TEST bdev_verify_big_io 00:08:12.759 ************************************ 00:08:12.759 15:16:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:12.759 [2024-07-11 15:16:26.254238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:12.759 [2024-07-11 15:16:26.254444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66733 ] 00:08:13.018 [2024-07-11 15:16:26.426972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:13.018 [2024-07-11 15:16:26.575919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.018 [2024-07-11 15:16:26.575937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.956 Running I/O for 5 seconds... 00:08:20.519 00:08:20.519 Latency(us) 00:08:20.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.519 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.519 Verification LBA range: start 0x0 length 0xbd0b 00:08:20.519 Nvme0n1 : 5.70 123.48 7.72 0.00 0.00 999752.70 23116.33 983754.94 00:08:20.519 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.519 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:20.519 Nvme0n1 : 5.74 121.54 7.60 0.00 0.00 1013424.45 12809.31 1456567.39 00:08:20.520 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x0 length 0xa000 00:08:20.520 Nvme1n1 : 5.77 130.22 8.14 0.00 0.00 932308.03 24427.05 857925.82 00:08:20.520 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0xa000 length 0xa000 00:08:20.520 Nvme1n1 : 5.74 120.81 7.55 0.00 0.00 986318.95 29074.15 1479445.41 00:08:20.520 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x0 length 0x8000 00:08:20.520 Nvme2n1 : 5.77 128.29 8.02 0.00 0.00 915424.96 24188.74 991380.95 00:08:20.520 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x8000 length 0x8000 00:08:20.520 Nvme2n1 : 5.75 124.07 7.75 0.00 0.00 938411.01 49569.05 1509949.44 00:08:20.520 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x0 length 0x8000 00:08:20.520 Nvme2n2 : 5.83 132.59 8.29 0.00 0.00 860784.95 24546.21 999006.95 00:08:20.520 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:08:20.520 Verification LBA range: start 0x8000 length 0x8000 00:08:20.520 Nvme2n2 : 5.82 128.43 8.03 0.00 0.00 879451.38 71970.44 1540453.47 00:08:20.520 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x0 length 0x8000 00:08:20.520 Nvme2n3 : 5.83 131.70 8.23 0.00 0.00 840111.48 39083.29 1121023.07 00:08:20.520 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x8000 length 0x8000 00:08:20.520 Nvme2n3 : 5.90 139.41 8.71 0.00 0.00 788843.39 17515.99 1570957.50 00:08:20.520 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x0 length 0x2000 00:08:20.520 Nvme3n1 : 5.88 153.16 9.57 0.00 0.00 708407.06 1891.61 976128.93 00:08:20.520 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:20.520 Verification LBA range: start 0x2000 length 0x2000 00:08:20.520 Nvme3n1 : 5.92 160.26 10.02 0.00 0.00 668511.19 629.29 1593835.52 00:08:20.520 =================================================================================================================== 00:08:20.520 Total : 1593.96 99.62 0.00 0.00 867321.81 629.29 1593835.52 00:08:21.087 00:08:21.087 real 0m8.543s 00:08:21.087 user 0m15.820s 00:08:21.087 sys 0m0.285s 00:08:21.087 15:16:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.087 15:16:34 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:21.087 ************************************ 00:08:21.087 END TEST bdev_verify_big_io 00:08:21.087 ************************************ 00:08:21.345 15:16:34 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:21.346 15:16:34 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:21.346 15:16:34 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:21.346 15:16:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.346 15:16:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.346 ************************************ 00:08:21.346 START TEST bdev_write_zeroes 00:08:21.346 ************************************ 00:08:21.346 15:16:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:21.346 [2024-07-11 15:16:34.835170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:21.346 [2024-07-11 15:16:34.835332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66837 ] 00:08:21.604 [2024-07-11 15:16:34.993233] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.604 [2024-07-11 15:16:35.142388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.170 Running I/O for 1 seconds... 
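Three timed phases (bdev_nbd, bdev_verify, bdev_verify_big_io) are now complete and the fourth is in flight. Each is launched through the same run_test wrapper, whose visible fingerprints in this log are the @1099 argument-count guard, the banner pairs, and the real/user/sys triplets. A rough sketch, only as far as this trace reveals the helper:

    run_test() {
        [ "$#" -le 1 ] && return 1        # @1099: need a test name plus a command
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                         # emits the real/user/sys summaries seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc                        # @1142
    }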
00:08:23.557 00:08:23.557 Latency(us) 00:08:23.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.557 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.557 Nvme0n1 : 1.02 9809.85 38.32 0.00 0.00 13015.32 9353.77 22163.08 00:08:23.557 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.557 Nvme1n1 : 1.02 9795.07 38.26 0.00 0.00 13014.03 10128.29 22282.24 00:08:23.557 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.557 Nvme2n1 : 1.02 9780.29 38.20 0.00 0.00 12992.38 9949.56 20375.74 00:08:23.557 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.557 Nvme2n2 : 1.02 9765.78 38.15 0.00 0.00 12945.76 9770.82 16920.20 00:08:23.557 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.557 Nvme2n3 : 1.02 9751.30 38.09 0.00 0.00 12939.05 9234.62 16562.73 00:08:23.557 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:23.557 Nvme3n1 : 1.03 9736.63 38.03 0.00 0.00 12929.66 7685.59 16801.05 00:08:23.558 =================================================================================================================== 00:08:23.558 Total : 58638.91 229.06 0.00 0.00 12972.70 7685.59 22282.24 00:08:24.490 00:08:24.490 real 0m3.186s 00:08:24.490 user 0m2.878s 00:08:24.490 sys 0m0.188s 00:08:24.490 15:16:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.490 15:16:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:24.490 ************************************ 00:08:24.490 END TEST bdev_write_zeroes 00:08:24.490 ************************************ 00:08:24.490 15:16:37 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:24.490 15:16:37 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.490 15:16:37 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:24.490 15:16:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.490 15:16:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.490 ************************************ 00:08:24.490 START TEST bdev_json_nonenclosed 00:08:24.490 ************************************ 00:08:24.490 15:16:37 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:24.490 [2024-07-11 15:16:38.078643] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
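All of the data-path passes here drive the same bdevperf binary against the same six-namespace bdev.json; only the workload knobs move. Stripped of the xtrace noise, the command lines recorded above are (each call also carries a trailing empty argument):

    # bdev_verify: 4 KiB blocks, queue depth 128, two cores
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # bdev_verify_big_io: identical, with -o 65536
    # bdev_write_zeroes: -q 128 -o 4096 -w write_zeroes -t 1, single core
    # (its EAL line shows -c 0x1), and no -C

The two JSON negative tests that follow reuse this same shape, swapping --json for the deliberately broken nonenclosed.json and nonarray.json files.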
00:08:24.490 [2024-07-11 15:16:38.078802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66890 ] 00:08:24.747 [2024-07-11 15:16:38.228082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.005 [2024-07-11 15:16:38.382237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.005 [2024-07-11 15:16:38.382373] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:25.005 [2024-07-11 15:16:38.382426] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:25.005 [2024-07-11 15:16:38.382442] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.266 00:08:25.266 real 0m0.746s 00:08:25.266 user 0m0.532s 00:08:25.266 sys 0m0.110s 00:08:25.266 15:16:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:25.266 15:16:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.266 15:16:38 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:25.266 ************************************ 00:08:25.266 END TEST bdev_json_nonenclosed 00:08:25.266 ************************************ 00:08:25.266 15:16:38 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:25.266 15:16:38 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:08:25.266 15:16:38 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.266 15:16:38 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:25.266 15:16:38 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.266 15:16:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.266 ************************************ 00:08:25.266 START TEST bdev_json_nonarray 00:08:25.266 ************************************ 00:08:25.266 15:16:38 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:25.266 [2024-07-11 15:16:38.877411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:25.267 [2024-07-11 15:16:38.877625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66921 ] 00:08:25.556 [2024-07-11 15:16:39.036773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.816 [2024-07-11 15:16:39.202567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.816 [2024-07-11 15:16:39.202707] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
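Both rejections above come from json_config_prepare_ctx refusing the supplied file before any bdev is created (json_config.c lines 608 and 614). The repo's nonenclosed.json and nonarray.json are not echoed into this log, so the shapes below are merely illustrative examples consistent with the two error strings, not the actual file contents:

    # nonenclosed.json style: a bare fragment, not wrapped in a top-level {} object
    "subsystems": []

    # nonarray.json style: top-level object present, but "subsystems" is not an array
    { "subsystems": { "bdev": {} } }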
00:08:25.816 [2024-07-11 15:16:39.202732] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:25.816 [2024-07-11 15:16:39.202747] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.074 00:08:26.074 real 0m0.760s 00:08:26.074 user 0m0.548s 00:08:26.074 sys 0m0.106s 00:08:26.074 15:16:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:26.074 15:16:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.074 15:16:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 END TEST bdev_json_nonarray 00:08:26.074 ************************************ 00:08:26.074 15:16:39 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:26.074 15:16:39 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:26.074 00:08:26.074 real 0m41.172s 00:08:26.074 user 1m2.061s 00:08:26.074 sys 0m6.184s 00:08:26.074 15:16:39 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.074 15:16:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 END TEST blockdev_nvme 00:08:26.074 ************************************ 00:08:26.074 15:16:39 -- common/autotest_common.sh@1142 -- # return 0 00:08:26.074 15:16:39 -- spdk/autotest.sh@213 -- # uname -s 00:08:26.074 15:16:39 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:26.074 15:16:39 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:26.074 15:16:39 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:26.074 15:16:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.074 15:16:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 START TEST blockdev_nvme_gpt 00:08:26.074 ************************************ 00:08:26.074 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:26.333 * Looking for test storage... 
00:08:26.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66997 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66997 00:08:26.333 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 66997 ']' 00:08:26.333 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.333 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.333 15:16:39 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:26.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.333 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
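The waitforlisten frame being traced here is all that the log will show: it guards the pid (@829), fixes the RPC socket (@833) and a 100-retry budget (@834), prints the banner (@836), and then drops into xtrace_disable, hiding its polling loop. A minimal sketch consistent with that frame; the RPC probe inside the loop is an assumption, since the body runs untraced:

    waitforlisten() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # @829
        local rpc_addr=${2:-/var/tmp/spdk.sock}   # @833
        local max_retries=100                     # @834
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1            # target died while starting
            # assumed probe: any successful RPC means the server is listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                          # @862
            fi
            sleep 0.5                             # assumed pacing
        done
        return 1
    }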
00:08:26.333 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:26.333 15:16:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:26.333 [2024-07-11 15:16:39.888764] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:26.333 [2024-07-11 15:16:39.888973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66997 ]
00:08:26.592 [2024-07-11 15:16:40.061332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.851 [2024-07-11 15:16:40.226137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.418 15:16:40 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:27.418 15:16:40 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0
00:08:27.418 15:16:40 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in
00:08:27.418 15:16:40 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf
00:08:27.418 15:16:40 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:27.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:27.934 Waiting for block devices as requested
00:08:27.934 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:27.934 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:28.209 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:08:28.209 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:08:33.487 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1')
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme=
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}"
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]]
00:08:33.487 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label
00:08:33.488 BYT;
00:08:33.488 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label
00:08:33.488 BYT;
00:08:33.488 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]]
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()'
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()'
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:08:33.488 15:16:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:08:33.488 15:16:46 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1
00:08:34.422 The operation has completed successfully.
00:08:34.422 15:16:47 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1
00:08:35.357 The operation has completed successfully.
00:08:35.357 15:16:48 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:35.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:36.490 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:08:36.490 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:36.490 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:36.748 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:08:36.748 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs
00:08:36.748 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:36.748 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:36.748 []
00:08:36.748 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:36.748 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf
00:08:36.748 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:08:36.748 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:08:36.748 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:36.748 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:08:36.748 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:36.748 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.007 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.007 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat
00:08:37.007 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.007 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.007 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:37.007 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.007 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs
00:08:37.267 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs
00:08:37.267 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)'
00:08:37.267 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:37.267 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:37.267 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:37.267 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name
00:08:37.267 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name
00:08:37.268 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f079b017-858f-472e-9072-c5ccddded296"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f079b017-858f-472e-9072-c5ccddded296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0b8579e2-4b80-4bc4-a5dc-49a36471f717"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0b8579e2-4b80-4bc4-a5dc-49a36471f717",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "cd1d22a2-e30d-4141-9880-6aaaa6922003"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd1d22a2-e30d-4141-9880-6aaaa6922003",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fee1bc32-4705-4204-b279-99ef5d7073c7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fee1bc32-4705-4204-b279-99ef5d7073c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "62ef469b-bf6c-4249-8ff3-e96ed117aa05"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "62ef469b-bf6c-4249-8ff3-e96ed117aa05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}'
00:08:37.268 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}")
00:08:37.268 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1
00:08:37.268 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT
00:08:37.268 15:16:50 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 66997
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 66997 ']'
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 66997
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66997
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:37.268 killing process with pid 66997
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66997'
00:08:37.268 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 66997
00:08:39.169 15:16:50 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 66997
00:08:39.169 15:16:52 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:39.169 15:16:52 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:08:39.169 15:16:52 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:08:39.169 15:16:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:39.169 15:16:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:39.169 ************************************
00:08:39.169 START TEST bdev_hello_world
00:08:39.169 ************************************
00:08:39.169 15:16:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:08:39.169 [2024-07-11 15:16:52.697240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:39.169 [2024-07-11 15:16:52.697406] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67630 ]
00:08:39.427 [2024-07-11 15:16:52.868753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.427 [2024-07-11 15:16:53.025762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:39.993 [2024-07-11 15:16:53.598324] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:08:39.993 [2024-07-11 15:16:53.598420] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:08:39.993 [2024-07-11 15:16:53.598466] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:08:39.993 [2024-07-11 15:16:53.601405] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:08:39.993 [2024-07-11 15:16:53.602014] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:08:39.993 [2024-07-11 15:16:53.602073] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:08:39.993 [2024-07-11 15:16:53.602341] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:08:39.993
00:08:39.993 [2024-07-11 15:16:53.602385] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:08:41.431
00:08:41.431 real 0m1.982s
00:08:41.431 user 0m1.672s
00:08:41.431 sys 0m0.201s
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:41.431 ************************************
00:08:41.431 END TEST bdev_hello_world
00:08:41.431 ************************************
00:08:41.431 15:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
00:08:41.431 15:16:54 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds ''
00:08:41.431 15:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:08:41.431 15:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:41.431 15:16:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:41.431 ************************************
00:08:41.431 START TEST bdev_bounds
00:08:41.431 ************************************
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds ''
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=67672
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:08:41.431 Process bdevio pid: 67672
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 67672'
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 67672
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 67672 ']'
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:41.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:41.431 15:16:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:41.431 [2024-07-11 15:16:54.743129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:41.431 [2024-07-11 15:16:54.743299] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67672 ]
00:08:41.431 [2024-07-11 15:16:54.919093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:41.691 [2024-07-11 15:16:55.084839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:41.691 [2024-07-11 15:16:55.084947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.691 [2024-07-11 15:16:55.084947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:42.259 15:16:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:42.259 15:16:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0
00:08:42.259 15:16:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:42.259 I/O targets:
00:08:42.259 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB)
00:08:42.259 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB)
00:08:42.259 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:08:42.259 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:42.259 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:42.259 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:42.259 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:08:42.259
00:08:42.259
00:08:42.259 CUnit - A unit testing framework for C - Version 2.1-3
00:08:42.259 http://cunit.sourceforge.net/
00:08:42.259
00:08:42.259
00:08:42.259 Suite: bdevio tests on: Nvme3n1
00:08:42.259 Test: blockdev write read block ...passed
00:08:42.259 Test: blockdev write zeroes read block ...passed
00:08:42.259 Test: blockdev write zeroes read no split ...passed
00:08:42.517 Test: blockdev write zeroes read split ...passed
00:08:42.517 Test: blockdev write zeroes read split partial ...passed
00:08:42.517 Test: blockdev reset ...[2024-07-11 15:16:55.920468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller
00:08:42.517 [2024-07-11 15:16:55.924520] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.517 passed
00:08:42.517 Test: blockdev write read 8 blocks ...passed
00:08:42.517 Test: blockdev write read size > 128k ...passed
00:08:42.517 Test: blockdev write read invalid size ...passed
00:08:42.517 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.517 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.517 Test: blockdev write read max offset ...passed
00:08:42.517 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.517 Test: blockdev writev readv 8 blocks ...passed
00:08:42.517 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.517 Test: blockdev writev readv block ...passed
00:08:42.517 Test: blockdev writev readv size > 128k ...passed
00:08:42.517 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.517 Test: blockdev comparev and writev ...[2024-07-11 15:16:55.933443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x293204000 len:0x1000
00:08:42.517 [2024-07-11 15:16:55.933517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:42.517 passed
00:08:42.517 Test: blockdev nvme passthru rw ...passed
00:08:42.517 Test: blockdev nvme passthru vendor specific ...[2024-07-11 15:16:55.934504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:42.517 [2024-07-11 15:16:55.934565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:42.517 passed
00:08:42.517 Test: blockdev nvme admin passthru ...passed
00:08:42.517 Test: blockdev copy ...passed
00:08:42.517 Suite: bdevio tests on: Nvme2n3
00:08:42.517 Test: blockdev write read block ...passed
00:08:42.517 Test: blockdev write zeroes read block ...passed
00:08:42.517 Test: blockdev write zeroes read no split ...passed
00:08:42.517 Test: blockdev write zeroes read split ...passed
00:08:42.517 Test: blockdev write zeroes read split partial ...passed
00:08:42.517 Test: blockdev reset ...[2024-07-11 15:16:55.993648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:08:42.517 [2024-07-11 15:16:55.997740] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.518 passed
00:08:42.518 Test: blockdev write read 8 blocks ...passed
00:08:42.518 Test: blockdev write read size > 128k ...passed
00:08:42.518 Test: blockdev write read invalid size ...passed
00:08:42.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.518 Test: blockdev write read max offset ...passed
00:08:42.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.518 Test: blockdev writev readv 8 blocks ...passed
00:08:42.518 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.518 Test: blockdev writev readv block ...passed
00:08:42.518 Test: blockdev writev readv size > 128k ...passed
00:08:42.518 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.518 Test: blockdev comparev and writev ...[2024-07-11 15:16:56.006071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ca3a000 len:0x1000
00:08:42.518 [2024-07-11 15:16:56.006187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:42.518 passed
00:08:42.518 Test: blockdev nvme passthru rw ...passed
00:08:42.518 Test: blockdev nvme passthru vendor specific ...[2024-07-11 15:16:56.007218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:42.518 passed
00:08:42.518 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:56.007262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:42.518 passed
00:08:42.518 Test: blockdev copy ...passed
00:08:42.518 Suite: bdevio tests on: Nvme2n2
00:08:42.518 Test: blockdev write read block ...passed
00:08:42.518 Test: blockdev write zeroes read block ...passed
00:08:42.518 Test: blockdev write zeroes read no split ...passed
00:08:42.518 Test: blockdev write zeroes read split ...passed
00:08:42.518 Test: blockdev write zeroes read split partial ...passed
00:08:42.518 Test: blockdev reset ...[2024-07-11 15:16:56.073072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:08:42.518 [2024-07-11 15:16:56.077676] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.518 passed
00:08:42.518 Test: blockdev write read 8 blocks ...passed
00:08:42.518 Test: blockdev write read size > 128k ...passed
00:08:42.518 Test: blockdev write read invalid size ...passed
00:08:42.518 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.518 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.518 Test: blockdev write read max offset ...passed
00:08:42.518 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.518 Test: blockdev writev readv 8 blocks ...passed
00:08:42.518 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.518 Test: blockdev writev readv block ...passed
00:08:42.518 Test: blockdev writev readv size > 128k ...passed
00:08:42.518 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.518 Test: blockdev comparev and writev ...[2024-07-11 15:16:56.086608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ca36000 len:0x1000
00:08:42.518 [2024-07-11 15:16:56.086668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:42.518 passed
00:08:42.518 Test: blockdev nvme passthru rw ...passed
00:08:42.518 Test: blockdev nvme passthru vendor specific ...[2024-07-11 15:16:56.087643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:42.518 [2024-07-11 15:16:56.087687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:42.518 passed
00:08:42.518 Test: blockdev nvme admin passthru ...passed
00:08:42.518 Test: blockdev copy ...passed
00:08:42.518 Suite: bdevio tests on: Nvme2n1
00:08:42.518 Test: blockdev write read block ...passed
00:08:42.518 Test: blockdev write zeroes read block ...passed
00:08:42.518 Test: blockdev write zeroes read no split ...passed
00:08:42.518 Test: blockdev write zeroes read split ...passed
00:08:42.777 Test: blockdev write zeroes read split partial ...passed
00:08:42.777 Test: blockdev reset ...[2024-07-11 15:16:56.150641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:08:42.777 [2024-07-11 15:16:56.154946] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.777 passed
00:08:42.777 Test: blockdev write read 8 blocks ...passed
00:08:42.777 Test: blockdev write read size > 128k ...passed
00:08:42.777 Test: blockdev write read invalid size ...passed
00:08:42.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.777 Test: blockdev write read max offset ...passed
00:08:42.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.777 Test: blockdev writev readv 8 blocks ...passed
00:08:42.777 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.777 Test: blockdev writev readv block ...passed
00:08:42.777 Test: blockdev writev readv size > 128k ...passed
00:08:42.777 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.777 Test: blockdev comparev and writev ...[2024-07-11 15:16:56.163333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27ca30000 len:0x1000
00:08:42.777 [2024-07-11 15:16:56.163422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:42.777 passed
00:08:42.777 Test: blockdev nvme passthru rw ...passed
00:08:42.777 Test: blockdev nvme passthru vendor specific ...[2024-07-11 15:16:56.164331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:42.777 [2024-07-11 15:16:56.164406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:42.777 passed
00:08:42.777 Test: blockdev nvme admin passthru ...passed
00:08:42.777 Test: blockdev copy ...passed
00:08:42.777 Suite: bdevio tests on: Nvme1n1
00:08:42.777 Test: blockdev write read block ...passed
00:08:42.777 Test: blockdev write zeroes read block ...passed
00:08:42.777 Test: blockdev write zeroes read no split ...passed
00:08:42.777 Test: blockdev write zeroes read split ...passed
00:08:42.777 Test: blockdev write zeroes read split partial ...passed
00:08:42.777 Test: blockdev reset ...[2024-07-11 15:16:56.225696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:08:42.777 [2024-07-11 15:16:56.229312] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.777 passed
00:08:42.777 Test: blockdev write read 8 blocks ...passed
00:08:42.777 Test: blockdev write read size > 128k ...passed
00:08:42.777 Test: blockdev write read invalid size ...passed
00:08:42.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.777 Test: blockdev write read max offset ...passed
00:08:42.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.777 Test: blockdev writev readv 8 blocks ...passed
00:08:42.777 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.777 Test: blockdev writev readv block ...passed
00:08:42.777 Test: blockdev writev readv size > 128k ...passed
00:08:42.777 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.777 Test: blockdev comparev and writev ...[2024-07-11 15:16:56.237829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26820e000 len:0x1000
00:08:42.777 [2024-07-11 15:16:56.237885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:42.777 passed
00:08:42.777 Test: blockdev nvme passthru rw ...passed
00:08:42.777 Test: blockdev nvme passthru vendor specific ...passed
00:08:42.777 Test: blockdev nvme admin passthru ...[2024-07-11 15:16:56.238880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:42.777 [2024-07-11 15:16:56.238937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:42.777 passed
00:08:42.777 Test: blockdev copy ...passed
00:08:42.777 Suite: bdevio tests on: Nvme0n1p2
00:08:42.777 Test: blockdev write read block ...passed
00:08:42.777 Test: blockdev write zeroes read block ...passed
00:08:42.777 Test: blockdev write zeroes read no split ...passed
00:08:42.777 Test: blockdev write zeroes read split ...passed
00:08:42.777 Test: blockdev write zeroes read split partial ...passed
00:08:42.777 Test: blockdev reset ...[2024-07-11 15:16:56.300997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:08:42.777 [2024-07-11 15:16:56.304586] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.777 passed
00:08:42.777 Test: blockdev write read 8 blocks ...passed
00:08:42.777 Test: blockdev write read size > 128k ...passed
00:08:42.777 Test: blockdev write read invalid size ...passed
00:08:42.777 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.777 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.777 Test: blockdev write read max offset ...passed
00:08:42.777 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.777 Test: blockdev writev readv 8 blocks ...passed
00:08:42.777 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.777 Test: blockdev writev readv block ...passed
00:08:42.777 Test: blockdev writev readv size > 128k ...passed
00:08:42.777 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.777 Test: blockdev comparev and writev ...passed
00:08:42.777 Test: blockdev nvme passthru rw ...passed
00:08:42.777 Test: blockdev nvme passthru vendor specific ...passed
00:08:42.777 Test: blockdev nvme admin passthru ...passed
00:08:42.777 Test: blockdev copy ...[2024-07-11 15:16:56.312150] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has
00:08:42.777 separate metadata which is not supported yet.
00:08:42.777 passed
00:08:42.777 Suite: bdevio tests on: Nvme0n1p1
00:08:42.777 Test: blockdev write read block ...passed
00:08:42.777 Test: blockdev write zeroes read block ...passed
00:08:42.777 Test: blockdev write zeroes read no split ...passed
00:08:42.777 Test: blockdev write zeroes read split ...passed
00:08:42.777 Test: blockdev write zeroes read split partial ...passed
00:08:42.777 Test: blockdev reset ...[2024-07-11 15:16:56.361077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:08:42.777 [2024-07-11 15:16:56.364573] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:42.778 passed
00:08:42.778 Test: blockdev write read 8 blocks ...passed
00:08:42.778 Test: blockdev write read size > 128k ...passed
00:08:42.778 Test: blockdev write read invalid size ...passed
00:08:42.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:42.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:42.778 Test: blockdev write read max offset ...passed
00:08:42.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:42.778 Test: blockdev writev readv 8 blocks ...passed
00:08:42.778 Test: blockdev writev readv 30 x 1block ...passed
00:08:42.778 Test: blockdev writev readv block ...passed
00:08:42.778 Test: blockdev writev readv size > 128k ...passed
00:08:42.778 Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:42.778 Test: blockdev comparev and writev ...passed
00:08:42.778 Test: blockdev nvme passthru rw ...passed
00:08:42.778 Test: blockdev nvme passthru vendor specific ...passed
00:08:42.778 Test: blockdev nvme admin passthru ...passed
00:08:42.778 Test: blockdev copy ...[2024-07-11 15:16:56.372144] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has
00:08:42.778 separate metadata which is not supported yet.
00:08:42.778 passed
00:08:42.778
00:08:42.778 Run Summary: Type Total Ran Passed Failed Inactive
00:08:42.778 suites 7 7 n/a 0 0
00:08:42.778 tests 161 161 161 0 0
00:08:42.778 asserts 1006 1006 1006 0 n/a
00:08:42.778
00:08:42.778 Elapsed time = 1.377 seconds
00:08:42.778 0
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 67672
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 67672 ']'
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 67672
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67672
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:43.036 killing process with pid 67672
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67672'
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 67672
00:08:43.036 15:16:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 67672
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT
00:08:43.972
00:08:43.972 real 0m2.632s
00:08:43.972 user 0m6.535s
00:08:43.972 sys 0m0.359s
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:43.972 ************************************
00:08:43.972 END TEST bdev_bounds
00:08:43.972 ************************************
00:08:43.972 15:16:57 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
00:08:43.972 15:16:57 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:43.972 15:16:57 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:08:43.972 15:16:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:43.972 15:16:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:43.972 ************************************
00:08:43.972 START TEST bdev_nbd
00:08:43.972 ************************************
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]]
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]]
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=67732
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 67732 /var/tmp/spdk-nbd.sock
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 67732 ']'
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:43.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:43.972 15:16:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:08:43.972 [2024-07-11 15:16:57.434045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:08:43.972 [2024-07-11 15:16:57.434264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:44.231 [2024-07-11 15:16:57.608243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.231 [2024-07-11 15:16:57.761604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:44.796 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:45.053 1+0 records in
00:08:45.053 1+0 records out
00:08:45.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701655 s, 5.8 MB/s
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:45.053 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:45.311 1+0 records in
00:08:45.311 1+0 records out
00:08:45.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526295 s, 7.8 MB/s
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:45.311 15:16:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:45.879 1+0 records in
00:08:45.879 1+0 records out
00:08:45.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965756 s, 4.2 MB/s
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:45.879 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd --
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.138 1+0 records in 00:08:46.138 1+0 records out 00:08:46.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651707 s, 6.3 MB/s 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.138 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.397 1+0 records in 00:08:46.397 1+0 records out 00:08:46.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797604 s, 5.1 MB/s 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.397 15:16:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.666 1+0 records in 00:08:46.666 1+0 records out 00:08:46.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740706 s, 5.5 MB/s 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.666 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.929 1+0 records in 00:08:46.929 1+0 records out 00:08:46.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846187 s, 4.8 MB/s 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:46.929 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd0", 00:08:47.187 "bdev_name": "Nvme0n1p1" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd1", 00:08:47.187 "bdev_name": "Nvme0n1p2" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd2", 00:08:47.187 "bdev_name": "Nvme1n1" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd3", 00:08:47.187 "bdev_name": "Nvme2n1" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd4", 00:08:47.187 "bdev_name": "Nvme2n2" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd5", 00:08:47.187 "bdev_name": "Nvme2n3" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd6", 00:08:47.187 "bdev_name": "Nvme3n1" 00:08:47.187 } 00:08:47.187 ]' 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd0", 00:08:47.187 "bdev_name": "Nvme0n1p1" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd1", 00:08:47.187 "bdev_name": "Nvme0n1p2" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd2", 00:08:47.187 "bdev_name": "Nvme1n1" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd3", 00:08:47.187 "bdev_name": "Nvme2n1" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd4", 00:08:47.187 "bdev_name": "Nvme2n2" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd5", 00:08:47.187 "bdev_name": "Nvme2n3" 00:08:47.187 }, 00:08:47.187 { 00:08:47.187 "nbd_device": "/dev/nbd6", 00:08:47.187 "bdev_name": "Nvme3n1" 00:08:47.187 } 00:08:47.187 ]' 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.187 15:17:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.446 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.704 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:47.962 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:47.962 15:17:01 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.221 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.479 15:17:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.738 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.996 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:49.256 
15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.256 15:17:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:49.515 /dev/nbd0 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.515 1+0 records in 00:08:49.515 1+0 records out 00:08:49.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588364 s, 7.0 MB/s 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.515 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:49.772 /dev/nbd1 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:49.772 15:17:03 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:49.772 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.772 1+0 records in 00:08:49.772 1+0 records out 00:08:49.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625719 s, 6.5 MB/s 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:49.773 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:50.030 /dev/nbd10 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.030 1+0 records in 00:08:50.030 1+0 records out 00:08:50.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586116 s, 7.0 MB/s 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.030 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:50.287 /dev/nbd11 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.287 1+0 records in 00:08:50.287 1+0 records out 00:08:50.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633243 s, 6.5 MB/s 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.287 15:17:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:50.546 /dev/nbd12 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
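Every nbd_start_disk in this block is followed by the same readiness check, and its shape is worth spelling out once: poll /proc/partitions until the kernel has attached the device, then issue one 4 KiB O_DIRECT read and insist it produced data, so a dead device node fails fast here instead of wedging the dd/cmp verification later on. A condensed sketch of that waitfornbd pattern (the 20-iteration budget and the dd/stat probe mirror the trace; the mktemp scratch file stands in for the fixed test/bdev/nbdtest path):

    # Wait until /dev/$nbd is attached and actually serves reads.
    wait_for_nbd() {
        local nbd=$1 tmp size i
        tmp=$(mktemp)
        for ((i = 1; i <= 20; i++)); do
            # Attachment is visible as a row in /proc/partitions.
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        # One O_DIRECT read bypasses the page cache, so success means the
        # NBD server answered a real I/O, not that a stale cache did.
        if dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            (( size != 0 )) && return 0
        fi
        rm -f "$tmp"
        return 1
    }

    wait_for_nbd nbd12

One subtlety the trace makes visible: the probe greps with -w, so nbd1 cannot false-match nbd10 through nbd15 once more devices are attached.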
00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.546 1+0 records in 00:08:50.546 1+0 records out 00:08:50.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955536 s, 4.3 MB/s 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.546 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:50.805 /dev/nbd13 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.805 1+0 records in 00:08:50.805 1+0 records out 00:08:50.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908973 s, 4.5 MB/s 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:50.805 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:51.065 /dev/nbd14 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.065 1+0 records in 00:08:51.065 1+0 records out 00:08:51.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885779 s, 4.6 MB/s 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.065 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.324 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd0", 00:08:51.324 "bdev_name": "Nvme0n1p1" 00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd1", 00:08:51.324 "bdev_name": "Nvme0n1p2" 00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd10", 00:08:51.324 "bdev_name": "Nvme1n1" 00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd11", 00:08:51.324 "bdev_name": "Nvme2n1" 00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd12", 00:08:51.324 "bdev_name": "Nvme2n2" 00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd13", 00:08:51.324 "bdev_name": "Nvme2n3" 
00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd14", 00:08:51.324 "bdev_name": "Nvme3n1" 00:08:51.324 } 00:08:51.324 ]' 00:08:51.324 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd0", 00:08:51.324 "bdev_name": "Nvme0n1p1" 00:08:51.324 }, 00:08:51.324 { 00:08:51.324 "nbd_device": "/dev/nbd1", 00:08:51.325 "bdev_name": "Nvme0n1p2" 00:08:51.325 }, 00:08:51.325 { 00:08:51.325 "nbd_device": "/dev/nbd10", 00:08:51.325 "bdev_name": "Nvme1n1" 00:08:51.325 }, 00:08:51.325 { 00:08:51.325 "nbd_device": "/dev/nbd11", 00:08:51.325 "bdev_name": "Nvme2n1" 00:08:51.325 }, 00:08:51.325 { 00:08:51.325 "nbd_device": "/dev/nbd12", 00:08:51.325 "bdev_name": "Nvme2n2" 00:08:51.325 }, 00:08:51.325 { 00:08:51.325 "nbd_device": "/dev/nbd13", 00:08:51.325 "bdev_name": "Nvme2n3" 00:08:51.325 }, 00:08:51.325 { 00:08:51.325 "nbd_device": "/dev/nbd14", 00:08:51.325 "bdev_name": "Nvme3n1" 00:08:51.325 } 00:08:51.325 ]' 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.325 /dev/nbd1 00:08:51.325 /dev/nbd10 00:08:51.325 /dev/nbd11 00:08:51.325 /dev/nbd12 00:08:51.325 /dev/nbd13 00:08:51.325 /dev/nbd14' 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.325 /dev/nbd1 00:08:51.325 /dev/nbd10 00:08:51.325 /dev/nbd11 00:08:51.325 /dev/nbd12 00:08:51.325 /dev/nbd13 00:08:51.325 /dev/nbd14' 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:51.325 256+0 records in 00:08:51.325 256+0 records out 00:08:51.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771776 s, 136 MB/s 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.325 15:17:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.584 256+0 records in 00:08:51.584 256+0 records out 00:08:51.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.165499 s, 6.3 MB/s 00:08:51.584 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.584 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.842 256+0 records in 00:08:51.842 256+0 records out 00:08:51.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176042 s, 6.0 MB/s 00:08:51.842 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:51.842 256+0 records in 00:08:51.842 256+0 records out 00:08:51.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187145 s, 5.6 MB/s 00:08:51.842 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.842 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:52.101 256+0 records in 00:08:52.101 256+0 records out 00:08:52.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151611 s, 6.9 MB/s 00:08:52.101 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.101 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:52.360 256+0 records in 00:08:52.360 256+0 records out 00:08:52.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193112 s, 5.4 MB/s 00:08:52.360 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.360 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:52.360 256+0 records in 00:08:52.360 256+0 records out 00:08:52.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182162 s, 5.8 MB/s 00:08:52.360 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.360 15:17:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:52.620 256+0 records in 00:08:52.620 256+0 records out 00:08:52.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189957 s, 5.5 MB/s 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.620 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.880 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.141 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.400 15:17:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.659 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.919 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.214 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:54.483 15:17:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.483 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:54.743 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.002 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.002 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:55.002 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:55.002 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:55.002 malloc_lvol_verify 00:08:55.261 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:55.261 eb3cf94d-e1e3-48d3-8b50-24921abf82c2 00:08:55.261 15:17:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:55.521 8af715ee-5536-4662-8eda-74a4331c092e 00:08:55.521 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:55.780 /dev/nbd0 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:55.780 mke2fs 1.46.5 (30-Dec-2021) 00:08:55.780 Discarding device blocks: 0/4096 done 00:08:55.780 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:55.780 00:08:55.780 Allocating group tables: 0/1 done 00:08:55.780 Writing inode tables: 0/1 done 00:08:55.780 Creating journal (1024 blocks): done 00:08:55.780 Writing superblocks and filesystem accounting information: 0/1 done 00:08:55.780 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:55.780 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 67732 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 67732 ']' 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 67732 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67732 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:56.039 killing process with pid 67732 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67732' 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 67732 00:08:56.039 15:17:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 67732 00:08:56.973 15:17:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:56.973 00:08:56.973 real 0m13.214s 00:08:56.973 user 0m18.576s 00:08:56.973 sys 0m4.373s 00:08:56.973 15:17:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.973 15:17:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:56.973 ************************************ 00:08:56.973 END TEST bdev_nbd 00:08:56.973 ************************************ 00:08:56.973 15:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:56.973 15:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:56.973 15:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:08:56.973 skipping fio tests on NVMe due to multi-ns failures. 00:08:56.973 15:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:08:56.973 15:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:56.973 15:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:56.973 15:17:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:56.973 15:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:56.973 15:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.973 15:17:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:57.232 ************************************ 00:08:57.232 START TEST bdev_verify 00:08:57.232 ************************************ 00:08:57.232 15:17:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:57.232 [2024-07-11 15:17:10.693447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:57.232 [2024-07-11 15:17:10.693644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68170 ] 00:08:57.490 [2024-07-11 15:17:10.862511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:57.490 [2024-07-11 15:17:11.011724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.490 [2024-07-11 15:17:11.011742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.056 Running I/O for 5 seconds... 
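bdevperf builds its bdev stack from the --json file named in the command above rather than from live RPCs. A minimal sketch of the shape such a config takes; the malloc bdev and its sizes here are illustrative stand-ins, not the contents of the repo's bdev.json (which attaches the NVMe controllers under test):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }

Of the flags on the command line: -q 128 keeps 128 I/Os in flight per job, -o 4096 sets the I/O size in bytes, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, and -m 0x3 pins the run to two cores, matching the two reactors started above.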
00:09:03.325 00:09:03.325 Latency(us) 00:09:03.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.325 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0x5e800 00:09:03.325 Nvme0n1p1 : 5.07 1312.36 5.13 0.00 0.00 97247.02 21090.68 85315.96 00:09:03.325 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x5e800 length 0x5e800 00:09:03.325 Nvme0n1p1 : 5.08 1234.58 4.82 0.00 0.00 103438.08 17039.36 90558.84 00:09:03.325 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0x5e7ff 00:09:03.325 Nvme0n1p2 : 5.07 1311.72 5.12 0.00 0.00 97111.33 22639.71 82932.83 00:09:03.325 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:03.325 Nvme0n1p2 : 5.08 1233.70 4.82 0.00 0.00 103298.88 18350.08 86269.21 00:09:03.325 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0xa0000 00:09:03.325 Nvme1n1 : 5.08 1311.16 5.12 0.00 0.00 97013.76 25261.15 80073.08 00:09:03.325 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0xa0000 length 0xa0000 00:09:03.325 Nvme1n1 : 5.09 1232.82 4.82 0.00 0.00 103121.37 20494.89 81979.58 00:09:03.325 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0x80000 00:09:03.325 Nvme2n1 : 5.08 1310.63 5.12 0.00 0.00 96866.01 26571.87 77689.95 00:09:03.325 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x80000 length 0x80000 00:09:03.325 Nvme2n1 : 5.09 1232.32 4.81 0.00 0.00 102959.96 20852.36 84362.71 00:09:03.325 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0x80000 00:09:03.325 Nvme2n2 : 5.08 1310.11 5.12 0.00 0.00 96705.15 26571.87 80073.08 00:09:03.325 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x80000 length 0x80000 00:09:03.325 Nvme2n2 : 5.09 1231.84 4.81 0.00 0.00 102773.97 20733.21 85315.96 00:09:03.325 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0x80000 00:09:03.325 Nvme2n3 : 5.08 1309.21 5.11 0.00 0.00 96553.13 20733.21 81979.58 00:09:03.325 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x80000 length 0x80000 00:09:03.325 Nvme2n3 : 5.09 1231.38 4.81 0.00 0.00 102595.70 18707.55 87222.46 00:09:03.325 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x0 length 0x20000 00:09:03.325 Nvme3n1 : 5.09 1319.48 5.15 0.00 0.00 95743.99 2859.75 86269.21 00:09:03.325 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:03.325 Verification LBA range: start 0x20000 length 0x20000 00:09:03.325 Nvme3n1 : 5.10 1230.73 4.81 0.00 0.00 102443.74 13583.83 89605.59 00:09:03.325 =================================================================================================================== 00:09:03.325 Total : 17812.05 69.58 0.00 0.00 99753.11 2859.75 90558.84 
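As a consistency check on the totals row: the MiB/s column is just IOPS times the 4096-byte I/O size, 17812.05 * 4096 / 2^20 ≈ 69.58 MiB/s, which matches the reported 69.58.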
00:09:04.702 00:09:04.702 real 0m7.419s 00:09:04.702 user 0m13.606s 00:09:04.702 sys 0m0.252s 00:09:04.702 15:17:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.702 15:17:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:04.702 ************************************ 00:09:04.702 END TEST bdev_verify 00:09:04.702 ************************************ 00:09:04.702 15:17:18 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:04.702 15:17:18 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:04.702 15:17:18 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:09:04.702 15:17:18 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.702 15:17:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.702 ************************************ 00:09:04.702 START TEST bdev_verify_big_io 00:09:04.702 ************************************ 00:09:04.702 15:17:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:04.702 [2024-07-11 15:17:18.162617] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:04.702 [2024-07-11 15:17:18.162833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68263 ] 00:09:04.960 [2024-07-11 15:17:18.335680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.960 [2024-07-11 15:17:18.494999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.960 [2024-07-11 15:17:18.495014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.896 Running I/O for 5 seconds... 
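The big-I/O pass reruns the same verify workload with -o 65536, so each completion moves 64 KiB and the conversion becomes MiB/s = IOPS / 16; in the totals row of the table below, 1697.26 / 16 ≈ 106.08 MiB/s, as reported.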
00:09:12.470 00:09:12.470 Latency(us) 00:09:12.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.470 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0x5e80 00:09:12.470 Nvme0n1p1 : 5.86 109.22 6.83 0.00 0.00 1100936.94 23116.33 1166779.11 00:09:12.470 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x5e80 length 0x5e80 00:09:12.470 Nvme0n1p1 : 5.68 112.70 7.04 0.00 0.00 1092552.05 30265.72 1182031.13 00:09:12.470 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0x5e7f 00:09:12.470 Nvme0n1p2 : 5.86 113.77 7.11 0.00 0.00 1048077.22 91988.71 1014258.97 00:09:12.470 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:12.470 Nvme0n1p2 : 5.83 114.81 7.18 0.00 0.00 1043300.35 93895.21 1021884.97 00:09:12.470 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0xa000 00:09:12.470 Nvme1n1 : 5.86 113.05 7.07 0.00 0.00 1022057.51 141081.13 884616.84 00:09:12.470 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0xa000 length 0xa000 00:09:12.470 Nvme1n1 : 5.83 114.25 7.14 0.00 0.00 1014369.89 144894.14 937998.89 00:09:12.470 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0x8000 00:09:12.470 Nvme2n1 : 5.94 118.44 7.40 0.00 0.00 960249.99 76260.07 884616.84 00:09:12.470 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x8000 length 0x8000 00:09:12.470 Nvme2n1 : 5.88 119.79 7.49 0.00 0.00 948110.43 44087.85 964689.92 00:09:12.470 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0x8000 00:09:12.470 Nvme2n2 : 6.02 123.98 7.75 0.00 0.00 894374.57 35270.28 884616.84 00:09:12.470 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x8000 length 0x8000 00:09:12.470 Nvme2n2 : 5.97 124.72 7.80 0.00 0.00 884074.16 56718.43 991380.95 00:09:12.470 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0x8000 00:09:12.470 Nvme2n3 : 6.05 119.99 7.50 0.00 0.00 895562.82 35508.60 1692973.61 00:09:12.470 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x8000 length 0x8000 00:09:12.470 Nvme2n3 : 6.02 131.59 8.22 0.00 0.00 817002.53 42896.29 1014258.97 00:09:12.470 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x0 length 0x2000 00:09:12.470 Nvme3n1 : 6.06 133.08 8.32 0.00 0.00 787624.76 4855.62 1731103.65 00:09:12.470 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:12.470 Verification LBA range: start 0x2000 length 0x2000 00:09:12.470 Nvme3n1 : 6.06 147.85 9.24 0.00 0.00 711289.62 2442.71 1037136.99 00:09:12.470 =================================================================================================================== 00:09:12.470 Total : 1697.26 106.08 0.00 0.00 933289.68 
2442.71 1731103.65 00:09:13.407 00:09:13.407 real 0m8.819s 00:09:13.407 user 0m16.377s 00:09:13.407 sys 0m0.270s 00:09:13.407 15:17:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.407 15:17:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:13.407 ************************************ 00:09:13.407 END TEST bdev_verify_big_io 00:09:13.407 ************************************ 00:09:13.407 15:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:13.407 15:17:26 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.407 15:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:13.407 15:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.407 15:17:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:13.407 ************************************ 00:09:13.407 START TEST bdev_write_zeroes 00:09:13.407 ************************************ 00:09:13.407 15:17:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:13.666 [2024-07-11 15:17:27.049623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:13.666 [2024-07-11 15:17:27.049838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68378 ] 00:09:13.666 [2024-07-11 15:17:27.222408] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.925 [2024-07-11 15:17:27.372599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.491 Running I/O for 1 seconds... 
00:09:15.425 00:09:15.425 Latency(us) 00:09:15.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.425 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme0n1p1 : 1.02 6949.68 27.15 0.00 0.00 18351.37 12690.15 28597.53 00:09:15.425 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme0n1p2 : 1.02 6938.31 27.10 0.00 0.00 18346.97 12988.04 29074.15 00:09:15.425 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme1n1 : 1.03 6928.08 27.06 0.00 0.00 18302.18 13405.09 25976.09 00:09:15.425 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme2n1 : 1.03 6917.83 27.02 0.00 0.00 18212.32 11677.32 21805.61 00:09:15.425 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme2n2 : 1.03 6907.60 26.98 0.00 0.00 18187.52 10068.71 21805.61 00:09:15.425 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme2n3 : 1.03 6952.87 27.16 0.00 0.00 18114.98 9413.35 21805.61 00:09:15.425 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:15.425 Nvme3n1 : 1.03 6942.47 27.12 0.00 0.00 18107.26 8221.79 21686.46 00:09:15.425 =================================================================================================================== 00:09:15.425 Total : 48536.85 189.60 0.00 0.00 18231.49 8221.79 29074.15 00:09:16.801 00:09:16.801 real 0m3.094s 00:09:16.801 user 0m2.743s 00:09:16.801 sys 0m0.230s 00:09:16.801 15:17:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.801 15:17:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:16.801 ************************************ 00:09:16.801 END TEST bdev_write_zeroes 00:09:16.801 ************************************ 00:09:16.801 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:16.801 15:17:30 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:16.801 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:16.801 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.801 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:16.801 ************************************ 00:09:16.801 START TEST bdev_json_nonenclosed 00:09:16.801 ************************************ 00:09:16.801 15:17:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:16.801 [2024-07-11 15:17:30.170084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:16.801 [2024-07-11 15:17:30.170296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68431 ] 00:09:16.801 [2024-07-11 15:17:30.325892] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.060 [2024-07-11 15:17:30.480430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.060 [2024-07-11 15:17:30.480553] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:17.060 [2024-07-11 15:17:30.480576] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:17.060 [2024-07-11 15:17:30.480591] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:17.319 00:09:17.319 real 0m0.741s 00:09:17.319 user 0m0.522s 00:09:17.319 sys 0m0.114s 00:09:17.319 15:17:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:17.319 15:17:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.319 15:17:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:17.319 ************************************ 00:09:17.319 END TEST bdev_json_nonenclosed 00:09:17.319 ************************************ 00:09:17.319 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:17.319 15:17:30 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:09:17.319 15:17:30 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:17.319 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:17.319 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.319 15:17:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:17.319 ************************************ 00:09:17.319 START TEST bdev_json_nonarray 00:09:17.319 ************************************ 00:09:17.320 15:17:30 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:17.579 [2024-07-11 15:17:30.991905] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:17.579 [2024-07-11 15:17:30.992112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68462 ] 00:09:17.579 [2024-07-11 15:17:31.162496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.838 [2024-07-11 15:17:31.315718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.838 [2024-07-11 15:17:31.315870] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
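Both JSON negative tests hand bdevperf a deliberately broken config and expect spdk_app_stop to exit non-zero (the es=234 captured above). Illustrative shapes of the two failure modes, matching the logged error strings but not necessarily the repo's exact nonenclosed.json and nonarray.json:

    # nonenclosed.json shape: the top-level object braces are missing
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]

    # nonarray.json shape: "subsystems" is an object where an array is required
    { "subsystems": { "subsystem": "bdev", "config": [] } }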
00:09:17.838 [2024-07-11 15:17:31.315894] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:17.838 [2024-07-11 15:17:31.315915] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.098 00:09:18.098 real 0m0.790s 00:09:18.098 user 0m0.559s 00:09:18.098 sys 0m0.125s 00:09:18.098 15:17:31 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:18.098 15:17:31 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.098 15:17:31 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:18.098 ************************************ 00:09:18.098 END TEST bdev_json_nonarray 00:09:18.098 ************************************ 00:09:18.357 15:17:31 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:18.357 15:17:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:09:18.357 15:17:31 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:18.357 15:17:31 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:18.357 15:17:31 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:18.357 15:17:31 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.357 15:17:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.357 15:17:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 ************************************ 00:09:18.357 START TEST bdev_gpt_uuid 00:09:18.357 ************************************ 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68493 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 68493 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 68493 ']' 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.357 15:17:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:18.357 [2024-07-11 15:17:31.886990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:18.357 [2024-07-11 15:17:31.887300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68493 ] 00:09:18.616 [2024-07-11 15:17:32.068698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.616 [2024-07-11 15:17:32.219648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.553 15:17:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.553 15:17:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:09:19.553 15:17:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:19.553 15:17:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.553 15:17:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:19.553 Some configs were skipped because the RPC state that can call them passed over. 00:09:19.553 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.553 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:19.553 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.553 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:19.811 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.811 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:19.812 { 00:09:19.812 "name": "Nvme0n1p1", 00:09:19.812 "aliases": [ 00:09:19.812 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:19.812 ], 00:09:19.812 "product_name": "GPT Disk", 00:09:19.812 "block_size": 4096, 00:09:19.812 "num_blocks": 774144, 00:09:19.812 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:19.812 "md_size": 64, 00:09:19.812 "md_interleave": false, 00:09:19.812 "dif_type": 0, 00:09:19.812 "assigned_rate_limits": { 00:09:19.812 "rw_ios_per_sec": 0, 00:09:19.812 "rw_mbytes_per_sec": 0, 00:09:19.812 "r_mbytes_per_sec": 0, 00:09:19.812 "w_mbytes_per_sec": 0 00:09:19.812 }, 00:09:19.812 "claimed": false, 00:09:19.812 "zoned": false, 00:09:19.812 "supported_io_types": { 00:09:19.812 "read": true, 00:09:19.812 "write": true, 00:09:19.812 "unmap": true, 00:09:19.812 "flush": true, 00:09:19.812 "reset": true, 00:09:19.812 "nvme_admin": false, 00:09:19.812 "nvme_io": false, 00:09:19.812 "nvme_io_md": false, 00:09:19.812 "write_zeroes": true, 00:09:19.812 "zcopy": false, 00:09:19.812 "get_zone_info": false, 00:09:19.812 "zone_management": false, 00:09:19.812 "zone_append": false, 00:09:19.812 "compare": true, 00:09:19.812 "compare_and_write": false, 00:09:19.812 "abort": true, 00:09:19.812 "seek_hole": false, 00:09:19.812 "seek_data": false, 00:09:19.812 "copy": 
true, 00:09:19.812 "nvme_iov_md": false 00:09:19.812 }, 00:09:19.812 "driver_specific": { 00:09:19.812 "gpt": { 00:09:19.812 "base_bdev": "Nvme0n1", 00:09:19.812 "offset_blocks": 256, 00:09:19.812 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:19.812 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:19.812 "partition_name": "SPDK_TEST_first" 00:09:19.812 } 00:09:19.812 } 00:09:19.812 } 00:09:19.812 ]' 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:19.812 { 00:09:19.812 "name": "Nvme0n1p2", 00:09:19.812 "aliases": [ 00:09:19.812 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:19.812 ], 00:09:19.812 "product_name": "GPT Disk", 00:09:19.812 "block_size": 4096, 00:09:19.812 "num_blocks": 774143, 00:09:19.812 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:19.812 "md_size": 64, 00:09:19.812 "md_interleave": false, 00:09:19.812 "dif_type": 0, 00:09:19.812 "assigned_rate_limits": { 00:09:19.812 "rw_ios_per_sec": 0, 00:09:19.812 "rw_mbytes_per_sec": 0, 00:09:19.812 "r_mbytes_per_sec": 0, 00:09:19.812 "w_mbytes_per_sec": 0 00:09:19.812 }, 00:09:19.812 "claimed": false, 00:09:19.812 "zoned": false, 00:09:19.812 "supported_io_types": { 00:09:19.812 "read": true, 00:09:19.812 "write": true, 00:09:19.812 "unmap": true, 00:09:19.812 "flush": true, 00:09:19.812 "reset": true, 00:09:19.812 "nvme_admin": false, 00:09:19.812 "nvme_io": false, 00:09:19.812 "nvme_io_md": false, 00:09:19.812 "write_zeroes": true, 00:09:19.812 "zcopy": false, 00:09:19.812 "get_zone_info": false, 00:09:19.812 "zone_management": false, 00:09:19.812 "zone_append": false, 00:09:19.812 "compare": true, 00:09:19.812 "compare_and_write": false, 00:09:19.812 "abort": true, 00:09:19.812 "seek_hole": false, 00:09:19.812 "seek_data": false, 00:09:19.812 "copy": true, 00:09:19.812 "nvme_iov_md": false 00:09:19.812 }, 00:09:19.812 "driver_specific": { 00:09:19.812 "gpt": { 00:09:19.812 "base_bdev": "Nvme0n1", 00:09:19.812 "offset_blocks": 774400, 00:09:19.812 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:19.812 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:19.812 "partition_name": "SPDK_TEST_second" 00:09:19.812 } 00:09:19.812 
} 00:09:19.812 } 00:09:19.812 ]' 00:09:19.812 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 68493 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 68493 ']' 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 68493 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68493 00:09:20.071 killing process with pid 68493 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68493' 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 68493 00:09:20.071 15:17:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 68493 00:09:21.975 ************************************ 00:09:21.975 END TEST bdev_gpt_uuid 00:09:21.975 ************************************ 00:09:21.975 00:09:21.975 real 0m3.518s 00:09:21.975 user 0m3.839s 00:09:21.975 sys 0m0.447s 00:09:21.975 15:17:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.975 15:17:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:21.975 15:17:35 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:21.975 15:17:35 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
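Two details of the gpt_uuid trace above are easy to misread. The GUID checks are plain jq extractions over the bdev_get_bdevs output; a sketch of the same lookup run by hand against the test's RPC socket:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1p1 \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'

And the backslash-riddled comparisons such as [[ 6f89f330-... == \6\f\8\9... ]] are not corruption: xtrace escapes every character of the right-hand side of == because it is quoted, forcing a literal string match instead of glob matching.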
00:09:22.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.493 Waiting for block devices as requested 00:09:22.493 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.493 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.493 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:22.752 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.039 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.039 15:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:28.039 15:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:09:28.039 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:28.039 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:28.039 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:28.039 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:28.039 15:17:41 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:28.039 00:09:28.039 real 1m1.824s 00:09:28.039 user 1m18.804s 00:09:28.039 sys 0m9.401s 00:09:28.039 15:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:28.039 ************************************ 00:09:28.039 END TEST blockdev_nvme_gpt 00:09:28.039 ************************************ 00:09:28.039 15:17:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:28.039 15:17:41 -- common/autotest_common.sh@1142 -- # return 0 00:09:28.039 15:17:41 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:28.039 15:17:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:28.039 15:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.039 15:17:41 -- common/autotest_common.sh@10 -- # set +x 00:09:28.039 ************************************ 00:09:28.039 START TEST nvme 00:09:28.039 ************************************ 00:09:28.039 15:17:41 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:28.039 * Looking for test storage... 
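The wipefs signatures above decode cleanly: the eight bytes 45 46 49 20 50 41 52 54 are ASCII "EFI PART" (45='E' 46='F' 49='I' 20=' ' 50='P' 41='A' 52='R' 54='T'), the GPT header magic erased at LBA 1 (offset 0x1000 with 4096-byte blocks) and again at the backup header near the end of the disk, while the 55 aa pair at offset 0x1fe is the protective MBR's boot signature.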
00:09:28.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.039 15:17:41 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.171 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.171 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.429 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.429 15:17:42 nvme -- nvme/nvme.sh@79 -- # uname 00:09:29.429 15:17:42 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:29.429 15:17:42 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:29.429 15:17:42 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1069 -- # stubpid=69130 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:09:29.429 Waiting for stub to ready for secondary processes... 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69130 ]] 00:09:29.429 15:17:42 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:29.429 [2024-07-11 15:17:42.978683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:29.429 [2024-07-11 15:17:42.978860] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:30.363 [2024-07-11 15:17:43.779597] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.363 15:17:43 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:30.363 15:17:43 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/69130 ]] 00:09:30.363 15:17:43 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:30.622 [2024-07-11 15:17:43.998084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.622 [2024-07-11 15:17:43.998239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.622 [2024-07-11 15:17:43.998252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.622 [2024-07-11 15:17:44.019431] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:30.622 [2024-07-11 15:17:44.019470] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:30.622 [2024-07-11 15:17:44.032718] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:30.622 [2024-07-11 15:17:44.032868] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:30.622 [2024-07-11 15:17:44.035741] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:30.622 [2024-07-11 15:17:44.035948] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:30.622 [2024-07-11 15:17:44.036057] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:30.622 [2024-07-11 15:17:44.038555] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:30.622 [2024-07-11 15:17:44.038756] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:30.622 [2024-07-11 15:17:44.038824] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:30.622 [2024-07-11 15:17:44.041120] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:30.622 [2024-07-11 15:17:44.041309] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:30.622 [2024-07-11 15:17:44.041422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:30.622 [2024-07-11 15:17:44.041488] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:30.622 [2024-07-11 15:17:44.041546] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:31.557 15:17:44 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:31.557 done. 00:09:31.557 15:17:44 nvme -- common/autotest_common.sh@1076 -- # echo done. 
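The stub launch above uses the standard secondary-process handshake: the parent polls for the /var/run/spdk_stub0 marker the stub creates once its primary process is initialized, giving up if the stub PID vanishes first. A minimal sketch of the loop as reconstructed from the trace (autotest_common.sh steps @1070-@1074; the exact failure handling is an assumption):

    stubpid=69130   # PID reported by the trace above
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1   # stub died before becoming ready (handling assumed)
        sleep 1s
    done
    echo done.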
00:09:31.557 15:17:44 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:31.557 15:17:44 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:31.557 15:17:44 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.557 15:17:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.557 ************************************ 00:09:31.557 START TEST nvme_reset 00:09:31.557 ************************************ 00:09:31.557 15:17:44 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:31.815 Initializing NVMe Controllers 00:09:31.815 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:31.815 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:31.815 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:31.815 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:31.815 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:31.815 00:09:31.815 real 0m0.292s 00:09:31.815 user 0m0.114s 00:09:31.815 sys 0m0.131s 00:09:31.815 15:17:45 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.815 15:17:45 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:31.815 ************************************ 00:09:31.815 END TEST nvme_reset 00:09:31.815 ************************************ 00:09:31.815 15:17:45 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:31.815 15:17:45 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:31.815 15:17:45 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.815 15:17:45 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.815 15:17:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.815 ************************************ 00:09:31.815 START TEST nvme_identify 00:09:31.815 ************************************ 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:09:31.815 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:31.815 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:31.815 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.815 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:31.815 15:17:45 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:31.815 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:32.076 [2024-07-11 15:17:45.618414] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69163 terminated unexpected 00:09:32.076 ===================================================== 00:09:32.076 NVMe 
Controller at 0000:00:11.0 [1b36:0010] 00:09:32.076 ===================================================== 00:09:32.076 Controller Capabilities/Features 00:09:32.076 ================================ 00:09:32.076 Vendor ID: 1b36 00:09:32.076 Subsystem Vendor ID: 1af4 00:09:32.076 Serial Number: 12341 00:09:32.076 Model Number: QEMU NVMe Ctrl 00:09:32.076 Firmware Version: 8.0.0 00:09:32.076 Recommended Arb Burst: 6 00:09:32.076 IEEE OUI Identifier: 00 54 52 00:09:32.076 Multi-path I/O 00:09:32.076 May have multiple subsystem ports: No 00:09:32.076 May have multiple controllers: No 00:09:32.076 Associated with SR-IOV VF: No 00:09:32.076 Max Data Transfer Size: 524288 00:09:32.076 Max Number of Namespaces: 256 00:09:32.076 Max Number of I/O Queues: 64 00:09:32.076 NVMe Specification Version (VS): 1.4 00:09:32.076 NVMe Specification Version (Identify): 1.4 00:09:32.076 Maximum Queue Entries: 2048 00:09:32.076 Contiguous Queues Required: Yes 00:09:32.076 Arbitration Mechanisms Supported 00:09:32.076 Weighted Round Robin: Not Supported 00:09:32.076 Vendor Specific: Not Supported 00:09:32.076 Reset Timeout: 7500 ms 00:09:32.076 Doorbell Stride: 4 bytes 00:09:32.076 NVM Subsystem Reset: Not Supported 00:09:32.076 Command Sets Supported 00:09:32.076 NVM Command Set: Supported 00:09:32.076 Boot Partition: Not Supported 00:09:32.076 Memory Page Size Minimum: 4096 bytes 00:09:32.076 Memory Page Size Maximum: 65536 bytes 00:09:32.076 Persistent Memory Region: Not Supported 00:09:32.076 Optional Asynchronous Events Supported 00:09:32.076 Namespace Attribute Notices: Supported 00:09:32.076 Firmware Activation Notices: Not Supported 00:09:32.076 ANA Change Notices: Not Supported 00:09:32.076 PLE Aggregate Log Change Notices: Not Supported 00:09:32.076 LBA Status Info Alert Notices: Not Supported 00:09:32.076 EGE Aggregate Log Change Notices: Not Supported 00:09:32.076 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.076 Zone Descriptor Change Notices: Not Supported 00:09:32.076 Discovery Log Change Notices: Not Supported 00:09:32.076 Controller Attributes 00:09:32.076 128-bit Host Identifier: Not Supported 00:09:32.076 Non-Operational Permissive Mode: Not Supported 00:09:32.076 NVM Sets: Not Supported 00:09:32.077 Read Recovery Levels: Not Supported 00:09:32.077 Endurance Groups: Not Supported 00:09:32.077 Predictable Latency Mode: Not Supported 00:09:32.077 Traffic Based Keep Alive: Not Supported 00:09:32.077 Namespace Granularity: Not Supported 00:09:32.077 SQ Associations: Not Supported 00:09:32.077 UUID List: Not Supported 00:09:32.077 Multi-Domain Subsystem: Not Supported 00:09:32.077 Fixed Capacity Management: Not Supported 00:09:32.077 Variable Capacity Management: Not Supported 00:09:32.077 Delete Endurance Group: Not Supported 00:09:32.077 Delete NVM Set: Not Supported 00:09:32.077 Extended LBA Formats Supported: Supported 00:09:32.077 Flexible Data Placement Supported: Not Supported 00:09:32.077 00:09:32.077 Controller Memory Buffer Support 00:09:32.077 ================================ 00:09:32.077 Supported: No 00:09:32.077 00:09:32.077 Persistent Memory Region Support 00:09:32.077 ================================ 00:09:32.077 Supported: No 00:09:32.077 00:09:32.077 Admin Command Set Attributes 00:09:32.077 ============================ 00:09:32.077 Security Send/Receive: Not Supported 00:09:32.077 Format NVM: Supported 00:09:32.077 Firmware Activate/Download: Not Supported 00:09:32.077 Namespace Management: Supported 00:09:32.077 Device Self-Test: Not Supported 00:09:32.077
Directives: Supported 00:09:32.077 NVMe-MI: Not Supported 00:09:32.077 Virtualization Management: Not Supported 00:09:32.077 Doorbell Buffer Config: Supported 00:09:32.077 Get LBA Status Capability: Not Supported 00:09:32.077 Command & Feature Lockdown Capability: Not Supported 00:09:32.077 Abort Command Limit: 4 00:09:32.077 Async Event Request Limit: 4 00:09:32.077 Number of Firmware Slots: N/A 00:09:32.077 Firmware Slot 1 Read-Only: N/A 00:09:32.077 Firmware Activation Without Reset: N/A 00:09:32.077 Multiple Update Detection Support: N/A 00:09:32.077 Firmware Update Granularity: No Information Provided 00:09:32.077 Per-Namespace SMART Log: Yes 00:09:32.077 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.077 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:32.077 Command Effects Log Page: Supported 00:09:32.077 Get Log Page Extended Data: Supported 00:09:32.077 Telemetry Log Pages: Not Supported 00:09:32.077 Persistent Event Log Pages: Not Supported 00:09:32.077 Supported Log Pages Log Page: May Support 00:09:32.077 Commands Supported & Effects Log Page: Not Supported 00:09:32.077 Feature Identifiers & Effects Log Page: May Support 00:09:32.077 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.077 Data Area 4 for Telemetry Log: Not Supported 00:09:32.077 Error Log Page Entries Supported: 1 00:09:32.077 Keep Alive: Not Supported 00:09:32.077 00:09:32.077 NVM Command Set Attributes 00:09:32.077 ========================== 00:09:32.077 Submission Queue Entry Size 00:09:32.077 Max: 64 00:09:32.077 Min: 64 00:09:32.077 Completion Queue Entry Size 00:09:32.077 Max: 16 00:09:32.077 Min: 16 00:09:32.077 Number of Namespaces: 256 00:09:32.077 Compare Command: Supported 00:09:32.077 Write Uncorrectable Command: Not Supported 00:09:32.077 Dataset Management Command: Supported 00:09:32.077 Write Zeroes Command: Supported 00:09:32.077 Set Features Save Field: Supported 00:09:32.077 Reservations: Not Supported 00:09:32.077 Timestamp: Supported 00:09:32.077 Copy: Supported 00:09:32.077 Volatile Write Cache: Present 00:09:32.077 Atomic Write Unit (Normal): 1 00:09:32.077 Atomic Write Unit (PFail): 1 00:09:32.077 Atomic Compare & Write Unit: 1 00:09:32.077 Fused Compare & Write: Not Supported 00:09:32.077 Scatter-Gather List 00:09:32.077 SGL Command Set: Supported 00:09:32.077 SGL Keyed: Not Supported 00:09:32.077 SGL Bit Bucket Descriptor: Not Supported 00:09:32.077 SGL Metadata Pointer: Not Supported 00:09:32.077 Oversized SGL: Not Supported 00:09:32.077 SGL Metadata Address: Not Supported 00:09:32.077 SGL Offset: Not Supported 00:09:32.077 Transport SGL Data Block: Not Supported 00:09:32.077 Replay Protected Memory Block: Not Supported 00:09:32.077 00:09:32.077 Firmware Slot Information 00:09:32.077 ========================= 00:09:32.077 Active slot: 1 00:09:32.077 Slot 1 Firmware Revision: 1.0 00:09:32.077 00:09:32.077 00:09:32.077 Commands Supported and Effects 00:09:32.077 ============================== 00:09:32.077 Admin Commands 00:09:32.077 -------------- 00:09:32.077 Delete I/O Submission Queue (00h): Supported 00:09:32.077 Create I/O Submission Queue (01h): Supported 00:09:32.077 Get Log Page (02h): Supported 00:09:32.077 Delete I/O Completion Queue (04h): Supported 00:09:32.077 Create I/O Completion Queue (05h): Supported 00:09:32.077 Identify (06h): Supported 00:09:32.077 Abort (08h): Supported 00:09:32.077 Set Features (09h): Supported 00:09:32.077 Get Features (0Ah): Supported 00:09:32.077 Asynchronous Event Request (0Ch): Supported 00:09:32.077 Namespace Attachment
(15h): Supported NS-Inventory-Change 00:09:32.077 Directive Send (19h): Supported 00:09:32.077 Directive Receive (1Ah): Supported 00:09:32.077 Virtualization Management (1Ch): Supported 00:09:32.077 Doorbell Buffer Config (7Ch): Supported 00:09:32.077 Format NVM (80h): Supported LBA-Change 00:09:32.077 I/O Commands 00:09:32.077 ------------ 00:09:32.077 Flush (00h): Supported LBA-Change 00:09:32.077 Write (01h): Supported LBA-Change 00:09:32.077 Read (02h): Supported 00:09:32.077 Compare (05h): Supported 00:09:32.077 Write Zeroes (08h): Supported LBA-Change 00:09:32.077 Dataset Management (09h): Supported LBA-Change 00:09:32.077 Unknown (0Ch): Supported 00:09:32.077 Unknown (12h): Supported 00:09:32.077 Copy (19h): Supported LBA-Change 00:09:32.077 Unknown (1Dh): Supported LBA-Change 00:09:32.077 00:09:32.077 Error Log 00:09:32.077 ========= 00:09:32.077 00:09:32.077 Arbitration 00:09:32.077 =========== 00:09:32.077 Arbitration Burst: no limit 00:09:32.077 00:09:32.077 Power Management 00:09:32.077 ================ 00:09:32.077 Number of Power States: 1 00:09:32.077 Current Power State: Power State #0 00:09:32.077 Power State #0: 00:09:32.077 Max Power: 25.00 W 00:09:32.077 Non-Operational State: Operational 00:09:32.077 Entry Latency: 16 microseconds 00:09:32.077 Exit Latency: 4 microseconds 00:09:32.077 Relative Read Throughput: 0 00:09:32.077 Relative Read Latency: 0 00:09:32.077 Relative Write Throughput: 0 00:09:32.077 Relative Write Latency: 0 [2024-07-11 15:17:45.620367] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69163 terminated unexpected 00:09:32.077 Idle Power: Not Reported 00:09:32.077 Active Power: Not Reported 00:09:32.077 Non-Operational Permissive Mode: Not Supported 00:09:32.077 00:09:32.077 Health Information 00:09:32.077 ================== 00:09:32.077 Critical Warnings: 00:09:32.077 Available Spare Space: OK 00:09:32.077 Temperature: OK 00:09:32.077 Device Reliability: OK 00:09:32.077 Read Only: No 00:09:32.077 Volatile Memory Backup: OK 00:09:32.077 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.077 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.077 Available Spare: 0% 00:09:32.077 Available Spare Threshold: 0% 00:09:32.077 Life Percentage Used: 0% 00:09:32.077 Data Units Read: 745 00:09:32.077 Data Units Written: 596 00:09:32.077 Host Read Commands: 33987 00:09:32.077 Host Write Commands: 31762 00:09:32.077 Controller Busy Time: 0 minutes 00:09:32.077 Power Cycles: 0 00:09:32.077 Power On Hours: 0 hours 00:09:32.077 Unsafe Shutdowns: 0 00:09:32.077 Unrecoverable Media Errors: 0 00:09:32.077 Lifetime Error Log Entries: 0 00:09:32.077 Warning Temperature Time: 0 minutes 00:09:32.077 Critical Temperature Time: 0 minutes 00:09:32.077 00:09:32.077 Number of Queues 00:09:32.077 ================ 00:09:32.077 Number of I/O Submission Queues: 64 00:09:32.077 Number of I/O Completion Queues: 64 00:09:32.077 00:09:32.077 ZNS Specific Controller Data 00:09:32.077 ============================ 00:09:32.077 Zone Append Size Limit: 0 00:09:32.077 00:09:32.077 00:09:32.077 Active Namespaces 00:09:32.077 ================= 00:09:32.077 Namespace ID:1 00:09:32.077 Error Recovery Timeout: Unlimited 00:09:32.077 Command Set Identifier: NVM (00h) 00:09:32.077 Deallocate: Supported 00:09:32.077 Deallocated/Unwritten Error: Supported 00:09:32.077 Deallocated Read Value: All 0x00 00:09:32.077 Deallocate in Write Zeroes: Not Supported 00:09:32.077 Deallocated Guard Field: 0xFFFF 00:09:32.077 Flush: Supported 00:09:32.077
Reservation: Not Supported 00:09:32.077 Namespace Sharing Capabilities: Private 00:09:32.077 Size (in LBAs): 1310720 (5GiB) 00:09:32.077 Capacity (in LBAs): 1310720 (5GiB) 00:09:32.077 Utilization (in LBAs): 1310720 (5GiB) 00:09:32.077 Thin Provisioning: Not Supported 00:09:32.077 Per-NS Atomic Units: No 00:09:32.077 Maximum Single Source Range Length: 128 00:09:32.077 Maximum Copy Length: 128 00:09:32.077 Maximum Source Range Count: 128 00:09:32.077 NGUID/EUI64 Never Reused: No 00:09:32.077 Namespace Write Protected: No 00:09:32.077 Number of LBA Formats: 8 00:09:32.077 Current LBA Format: LBA Format #04 00:09:32.077 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.078 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.078 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.078 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.078 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.078 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.078 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.078 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.078 00:09:32.078 NVM Specific Namespace Data 00:09:32.078 =========================== 00:09:32.078 Logical Block Storage Tag Mask: 0 00:09:32.078 Protection Information Capabilities: 00:09:32.078 16b Guard Protection Information Storage Tag Support: No 00:09:32.078 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.078 Storage Tag Check Read Support: No 00:09:32.078 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.078 ===================================================== 00:09:32.078 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.078 ===================================================== 00:09:32.078 Controller Capabilities/Features 00:09:32.078 ================================ 00:09:32.078 Vendor ID: 1b36 00:09:32.078 Subsystem Vendor ID: 1af4 00:09:32.078 Serial Number: 12343 00:09:32.078 Model Number: QEMU NVMe Ctrl 00:09:32.078 Firmware Version: 8.0.0 00:09:32.078 Recommended Arb Burst: 6 00:09:32.078 IEEE OUI Identifier: 00 54 52 00:09:32.078 Multi-path I/O 00:09:32.078 May have multiple subsystem ports: No 00:09:32.078 May have multiple controllers: Yes 00:09:32.078 Associated with SR-IOV VF: No 00:09:32.078 Max Data Transfer Size: 524288 00:09:32.078 Max Number of Namespaces: 256 00:09:32.078 Max Number of I/O Queues: 64 00:09:32.078 NVMe Specification Version (VS): 1.4 00:09:32.078 NVMe Specification Version (Identify): 1.4 00:09:32.078 Maximum Queue Entries: 2048 00:09:32.078 Contiguous Queues Required: Yes 00:09:32.078 Arbitration Mechanisms Supported 00:09:32.078 Weighted Round Robin: Not Supported 00:09:32.078 Vendor Specific: Not Supported 00:09:32.078 Reset Timeout: 7500 ms 
00:09:32.078 Doorbell Stride: 4 bytes 00:09:32.078 NVM Subsystem Reset: Not Supported 00:09:32.078 Command Sets Supported 00:09:32.078 NVM Command Set: Supported 00:09:32.078 Boot Partition: Not Supported 00:09:32.078 Memory Page Size Minimum: 4096 bytes 00:09:32.078 Memory Page Size Maximum: 65536 bytes 00:09:32.078 Persistent Memory Region: Not Supported 00:09:32.078 Optional Asynchronous Events Supported 00:09:32.078 Namespace Attribute Notices: Supported 00:09:32.078 Firmware Activation Notices: Not Supported 00:09:32.078 ANA Change Notices: Not Supported 00:09:32.078 PLE Aggregate Log Change Notices: Not Supported 00:09:32.078 LBA Status Info Alert Notices: Not Supported 00:09:32.078 EGE Aggregate Log Change Notices: Not Supported 00:09:32.078 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.078 Zone Descriptor Change Notices: Not Supported 00:09:32.078 Discovery Log Change Notices: Not Supported 00:09:32.078 Controller Attributes 00:09:32.078 128-bit Host Identifier: Not Supported 00:09:32.078 Non-Operational Permissive Mode: Not Supported 00:09:32.078 NVM Sets: Not Supported 00:09:32.078 Read Recovery Levels: Not Supported 00:09:32.078 Endurance Groups: Supported 00:09:32.078 Predictable Latency Mode: Not Supported 00:09:32.078 Traffic Based Keep Alive: Not Supported 00:09:32.078 Namespace Granularity: Not Supported 00:09:32.078 SQ Associations: Not Supported 00:09:32.078 UUID List: Not Supported 00:09:32.078 Multi-Domain Subsystem: Not Supported 00:09:32.078 Fixed Capacity Management: Not Supported 00:09:32.078 Variable Capacity Management: Not Supported 00:09:32.078 Delete Endurance Group: Not Supported 00:09:32.078 Delete NVM Set: Not Supported 00:09:32.078 Extended LBA Formats Supported: Supported 00:09:32.078 Flexible Data Placement Supported: Supported 00:09:32.078 00:09:32.078 Controller Memory Buffer Support 00:09:32.078 ================================ 00:09:32.078 Supported: No 00:09:32.078 00:09:32.078 Persistent Memory Region Support 00:09:32.078 ================================ 00:09:32.078 Supported: No 00:09:32.078 00:09:32.078 Admin Command Set Attributes 00:09:32.078 ============================ 00:09:32.078 Security Send/Receive: Not Supported 00:09:32.078 Format NVM: Supported 00:09:32.078 Firmware Activate/Download: Not Supported 00:09:32.078 Namespace Management: Supported 00:09:32.078 Device Self-Test: Not Supported 00:09:32.078 Directives: Supported 00:09:32.078 NVMe-MI: Not Supported 00:09:32.078 Virtualization Management: Not Supported 00:09:32.078 Doorbell Buffer Config: Supported 00:09:32.078 Get LBA Status Capability: Not Supported 00:09:32.078 Command & Feature Lockdown Capability: Not Supported 00:09:32.078 Abort Command Limit: 4 00:09:32.078 Async Event Request Limit: 4 00:09:32.078 Number of Firmware Slots: N/A 00:09:32.078 Firmware Slot 1 Read-Only: N/A 00:09:32.078 Firmware Activation Without Reset: N/A 00:09:32.078 Multiple Update Detection Support: N/A 00:09:32.078 Firmware Update Granularity: No Information Provided 00:09:32.078 Per-Namespace SMART Log: Yes 00:09:32.078 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.078 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.078 Command Effects Log Page: Supported 00:09:32.078 Get Log Page Extended Data: Supported 00:09:32.078 Telemetry Log Pages: Not Supported 00:09:32.078 Persistent Event Log Pages: Not Supported 00:09:32.078 Supported Log Pages Log Page: May Support 00:09:32.078 Commands Supported & Effects Log Page: Not Supported 00:09:32.078 Feature Identifiers &
Effects Log Page: May Support 00:09:32.078 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.078 Data Area 4 for Telemetry Log: Not Supported 00:09:32.078 Error Log Page Entries Supported: 1 00:09:32.078 Keep Alive: Not Supported 00:09:32.078 00:09:32.078 NVM Command Set Attributes 00:09:32.078 ========================== 00:09:32.078 Submission Queue Entry Size 00:09:32.078 Max: 64 00:09:32.078 Min: 64 00:09:32.078 Completion Queue Entry Size 00:09:32.078 Max: 16 00:09:32.078 Min: 16 00:09:32.079 Number of Namespaces: 256 00:09:32.079 Compare Command: Supported 00:09:32.079 Write Uncorrectable Command: Not Supported 00:09:32.079 Dataset Management Command: Supported 00:09:32.079 Write Zeroes Command: Supported 00:09:32.079 Set Features Save Field: Supported 00:09:32.079 Reservations: Not Supported 00:09:32.079 Timestamp: Supported 00:09:32.079 Copy: Supported 00:09:32.079 Volatile Write Cache: Present 00:09:32.079 Atomic Write Unit (Normal): 1 00:09:32.079 Atomic Write Unit (PFail): 1 00:09:32.079 Atomic Compare & Write Unit: 1 00:09:32.079 Fused Compare & Write: Not Supported 00:09:32.079 Scatter-Gather List 00:09:32.079 SGL Command Set: Supported 00:09:32.079 SGL Keyed: Not Supported 00:09:32.079 SGL Bit Bucket Descriptor: Not Supported 00:09:32.079 SGL Metadata Pointer: Not Supported 00:09:32.079 Oversized SGL: Not Supported 00:09:32.079 SGL Metadata Address: Not Supported 00:09:32.079 SGL Offset: Not Supported 00:09:32.079 Transport SGL Data Block: Not Supported 00:09:32.079 Replay Protected Memory Block: Not Supported 00:09:32.079 00:09:32.079 Firmware Slot Information 00:09:32.079 ========================= 00:09:32.079 Active slot: 1 00:09:32.079 Slot 1 Firmware Revision: 1.0 00:09:32.079 00:09:32.079 00:09:32.079 Commands Supported and Effects 00:09:32.079 ============================== 00:09:32.079 Admin Commands 00:09:32.079 -------------- 00:09:32.079 Delete I/O Submission Queue (00h): Supported 00:09:32.079 Create I/O Submission Queue (01h): Supported 00:09:32.079 Get Log Page (02h): Supported 00:09:32.079 Delete I/O Completion Queue (04h): Supported 00:09:32.079 Create I/O Completion Queue (05h): Supported 00:09:32.079 Identify (06h): Supported 00:09:32.079 Abort (08h): Supported 00:09:32.079 Set Features (09h): Supported 00:09:32.079 Get Features (0Ah): Supported 00:09:32.079 Asynchronous Event Request (0Ch): Supported 00:09:32.079 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.079 Directive Send (19h): Supported 00:09:32.079 Directive Receive (1Ah): Supported 00:09:32.079 Virtualization Management (1Ch): Supported 00:09:32.079 Doorbell Buffer Config (7Ch): Supported 00:09:32.079 Format NVM (80h): Supported LBA-Change 00:09:32.079 I/O Commands 00:09:32.079 ------------ 00:09:32.079 Flush (00h): Supported LBA-Change 00:09:32.079 Write (01h): Supported LBA-Change 00:09:32.079 Read (02h): Supported 00:09:32.079 Compare (05h): Supported 00:09:32.079 Write Zeroes (08h): Supported LBA-Change 00:09:32.079 Dataset Management (09h): Supported LBA-Change 00:09:32.079 Unknown (0Ch): Supported 00:09:32.079 Unknown (12h): Supported 00:09:32.079 Copy (19h): Supported LBA-Change 00:09:32.079 Unknown (1Dh): Supported LBA-Change 00:09:32.079 00:09:32.079 Error Log 00:09:32.079 ========= 00:09:32.079 00:09:32.079 Arbitration 00:09:32.079 =========== 00:09:32.079 Arbitration Burst: no limit 00:09:32.079 00:09:32.079 Power Management 00:09:32.079 ================ 00:09:32.079 Number of Power States: 1 00:09:32.079 Current Power State: Power State #0 00:09:32.079 Power
State #0: 00:09:32.079 Max Power: 25.00 W 00:09:32.079 Non-Operational State: Operational 00:09:32.079 Entry Latency: 16 microseconds 00:09:32.079 Exit Latency: 4 microseconds 00:09:32.079 Relative Read Throughput: 0 00:09:32.079 Relative Read Latency: 0 00:09:32.079 Relative Write Throughput: 0 00:09:32.079 Relative Write Latency: 0 00:09:32.079 Idle Power: Not Reported 00:09:32.079 Active Power: Not Reported 00:09:32.079 Non-Operational Permissive Mode: Not Supported 00:09:32.079 00:09:32.079 Health Information 00:09:32.079 ================== 00:09:32.079 Critical Warnings: 00:09:32.079 Available Spare Space: OK 00:09:32.079 Temperature: OK 00:09:32.079 Device Reliability: OK 00:09:32.079 Read Only: No 00:09:32.079 Volatile Memory Backup: OK 00:09:32.079 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.079 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.079 Available Spare: 0% 00:09:32.079 Available Spare Threshold: 0% 00:09:32.079 Life Percentage Used: 0% 00:09:32.079 Data Units Read: 798 00:09:32.079 Data Units Written: 692 00:09:32.079 Host Read Commands: 33987 00:09:32.079 Host Write Commands: 32577 00:09:32.079 Controller Busy Time: 0 minutes 00:09:32.079 Power Cycles: 0 00:09:32.079 Power On Hours: 0 hours 00:09:32.079 Unsafe Shutdowns: 0 00:09:32.079 Unrecoverable Media Errors: 0 00:09:32.079 Lifetime Error Log Entries: 0 00:09:32.079 Warning Temperature Time: 0 minutes 00:09:32.079 Critical Temperature Time: 0 minutes 00:09:32.079 00:09:32.079 Number of Queues 00:09:32.079 ================ 00:09:32.079 Number of I/O Submission Queues: 64 00:09:32.079 Number of I/O Completion Queues: 64 00:09:32.079 00:09:32.079 ZNS Specific Controller Data 00:09:32.079 ============================ 00:09:32.079 Zone Append Size Limit: 0 00:09:32.079 00:09:32.079 00:09:32.079 Active Namespaces 00:09:32.079 ================= 00:09:32.079 Namespace ID:1 00:09:32.079 Error Recovery Timeout: Unlimited 00:09:32.079 Command Set Identifier: NVM (00h) 00:09:32.079 Deallocate: Supported 00:09:32.079 Deallocated/Unwritten Error: Supported 00:09:32.079 Deallocated Read Value: All 0x00 00:09:32.079 Deallocate in Write Zeroes: Not Supported 00:09:32.079 Deallocated Guard Field: 0xFFFF 00:09:32.079 Flush: Supported 00:09:32.079 Reservation: Not Supported 00:09:32.079 Namespace Sharing Capabilities: Multiple Controllers 00:09:32.079 Size (in LBAs): 262144 (1GiB) 00:09:32.079 Capacity (in LBAs): 262144 (1GiB) 00:09:32.079 Utilization (in LBAs): 262144 (1GiB) 00:09:32.079 Thin Provisioning: Not Supported 00:09:32.079 Per-NS Atomic Units: No 00:09:32.079 Maximum Single Source Range Length: 128 00:09:32.079 Maximum Copy Length: 128 00:09:32.079 Maximum Source Range Count: 128 00:09:32.079 NGUID/EUI64 Never Reused: No 00:09:32.079 Namespace Write Protected: No 00:09:32.079 Endurance group ID: 1 00:09:32.079 Number of LBA Formats: 8 00:09:32.079 Current LBA Format: LBA Format #04 00:09:32.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.079 00:09:32.079 Get Feature FDP: 00:09:32.079 ================ 00:09:32.079 Enabled: Yes 00:09:32.079 FDP configuration index: 0 
00:09:32.079 00:09:32.079 FDP configurations log page 00:09:32.079 =========================== 00:09:32.079 Number of FDP configurations: 1 00:09:32.079 Version: 0 00:09:32.079 Size: 112 00:09:32.079 FDP Configuration Descriptor: 0 00:09:32.079 Descriptor Size: 96 00:09:32.079 Reclaim Group Identifier format: 2 00:09:32.079 FDP Volatile Write Cache: Not Present 00:09:32.079 FDP Configuration: Valid 00:09:32.079 Vendor Specific Size: 0 00:09:32.079 Number of Reclaim Groups: 2 00:09:32.079 Number of Reclaim Unit Handles: 8 00:09:32.079 Max Placement Identifiers: 128 00:09:32.079 Number of Namespaces Supported: 256 00:09:32.079 Reclaim unit Nominal Size: 6000000 bytes 00:09:32.079 Estimated Reclaim Unit Time Limit: Not Reported 00:09:32.079 RUH Desc #000: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #001: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #002: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #003: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #004: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #005: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #006: RUH Type: Initially Isolated 00:09:32.079 RUH Desc #007: RUH Type: Initially Isolated 00:09:32.079 00:09:32.079 FDP reclaim unit handle usage log page 00:09:32.079 ====================================== 00:09:32.079 Number of Reclaim Unit Handles: 8 00:09:32.079 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:32.079 RUH Usage Desc #001: RUH Attributes: Unused 00:09:32.079 RUH Usage Desc #002: RUH Attributes: Unused 00:09:32.079 RUH Usage Desc #003: RUH Attributes: Unused 00:09:32.079 RUH Usage Desc #004: RUH Attributes: Unused [2024-07-11 15:17:45.622586] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69163 terminated unexpected 00:09:32.079 RUH Usage Desc #005: RUH Attributes: Unused 00:09:32.080 RUH Usage Desc #006: RUH Attributes: Unused 00:09:32.080 RUH Usage Desc #007: RUH Attributes: Unused 00:09:32.080 00:09:32.080 FDP statistics log page 00:09:32.080 ======================= 00:09:32.080 Host bytes with metadata written: 439787520 00:09:32.080 Media bytes with metadata written: 439853056 00:09:32.080 Media bytes erased: 0 00:09:32.080 00:09:32.080 FDP events log page 00:09:32.080 =================== 00:09:32.080 Number of FDP events: 0 00:09:32.080 00:09:32.080 NVM Specific Namespace Data 00:09:32.080 =========================== 00:09:32.080 Logical Block Storage Tag Mask: 0 00:09:32.080 Protection Information Capabilities: 00:09:32.080 16b Guard Protection Information Storage Tag Support: No 00:09:32.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.080 Storage Tag Check Read Support: No 00:09:32.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b
Guard PI 00:09:32.080 ===================================================== 00:09:32.080 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.080 ===================================================== 00:09:32.080 Controller Capabilities/Features 00:09:32.080 ================================ 00:09:32.080 Vendor ID: 1b36 00:09:32.080 Subsystem Vendor ID: 1af4 00:09:32.080 Serial Number: 12340 00:09:32.080 Model Number: QEMU NVMe Ctrl 00:09:32.080 Firmware Version: 8.0.0 00:09:32.080 Recommended Arb Burst: 6 00:09:32.080 IEEE OUI Identifier: 00 54 52 00:09:32.080 Multi-path I/O 00:09:32.080 May have multiple subsystem ports: No 00:09:32.080 May have multiple controllers: No 00:09:32.080 Associated with SR-IOV VF: No 00:09:32.080 Max Data Transfer Size: 524288 00:09:32.080 Max Number of Namespaces: 256 00:09:32.080 Max Number of I/O Queues: 64 00:09:32.080 NVMe Specification Version (VS): 1.4 00:09:32.080 NVMe Specification Version (Identify): 1.4 00:09:32.080 Maximum Queue Entries: 2048 00:09:32.080 Contiguous Queues Required: Yes 00:09:32.080 Arbitration Mechanisms Supported 00:09:32.080 Weighted Round Robin: Not Supported 00:09:32.080 Vendor Specific: Not Supported 00:09:32.080 Reset Timeout: 7500 ms 00:09:32.080 Doorbell Stride: 4 bytes 00:09:32.080 NVM Subsystem Reset: Not Supported 00:09:32.080 Command Sets Supported 00:09:32.080 NVM Command Set: Supported 00:09:32.080 Boot Partition: Not Supported 00:09:32.080 Memory Page Size Minimum: 4096 bytes 00:09:32.080 Memory Page Size Maximum: 65536 bytes 00:09:32.080 Persistent Memory Region: Not Supported 00:09:32.080 Optional Asynchronous Events Supported 00:09:32.080 Namespace Attribute Notices: Supported 00:09:32.080 Firmware Activation Notices: Not Supported 00:09:32.080 ANA Change Notices: Not Supported 00:09:32.080 PLE Aggregate Log Change Notices: Not Supported 00:09:32.080 LBA Status Info Alert Notices: Not Supported 00:09:32.080 EGE Aggregate Log Change Notices: Not Supported 00:09:32.080 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.080 Zone Descriptor Change Notices: Not Supported 00:09:32.080 Discovery Log Change Notices: Not Supported 00:09:32.080 Controller Attributes 00:09:32.080 128-bit Host Identifier: Not Supported 00:09:32.080 Non-Operational Permissive Mode: Not Supported 00:09:32.080 NVM Sets: Not Supported 00:09:32.080 Read Recovery Levels: Not Supported 00:09:32.080 Endurance Groups: Not Supported 00:09:32.080 Predictable Latency Mode: Not Supported 00:09:32.080 Traffic Based Keep Alive: Not Supported 00:09:32.080 Namespace Granularity: Not Supported 00:09:32.080 SQ Associations: Not Supported 00:09:32.080 UUID List: Not Supported 00:09:32.080 Multi-Domain Subsystem: Not Supported 00:09:32.080 Fixed Capacity Management: Not Supported 00:09:32.080 Variable Capacity Management: Not Supported 00:09:32.080 Delete Endurance Group: Not Supported 00:09:32.080 Delete NVM Set: Not Supported 00:09:32.080 Extended LBA Formats Supported: Supported 00:09:32.080 Flexible Data Placement Supported: Not Supported 00:09:32.080 00:09:32.080 Controller Memory Buffer Support 00:09:32.080 ================================ 00:09:32.080 Supported: No 00:09:32.080 00:09:32.080 Persistent Memory Region Support 00:09:32.080 ================================ 00:09:32.080 Supported: No 00:09:32.080 00:09:32.080 Admin Command Set Attributes 00:09:32.080 ============================ 00:09:32.080 Security Send/Receive: Not Supported 00:09:32.080 Format NVM: Supported 00:09:32.080 Firmware Activate/Download: Not Supported 00:09:32.080
Namespace Management: Supported 00:09:32.080 Device Self-Test: Not Supported 00:09:32.080 Directives: Supported 00:09:32.080 NVMe-MI: Not Supported 00:09:32.080 Virtualization Management: Not Supported 00:09:32.080 Doorbell Buffer Config: Supported 00:09:32.080 Get LBA Status Capability: Not Supported 00:09:32.080 Command & Feature Lockdown Capability: Not Supported 00:09:32.080 Abort Command Limit: 4 00:09:32.080 Async Event Request Limit: 4 00:09:32.080 Number of Firmware Slots: N/A 00:09:32.080 Firmware Slot 1 Read-Only: N/A 00:09:32.080 Firmware Activation Without Reset: N/A 00:09:32.080 Multiple Update Detection Support: N/A 00:09:32.080 Firmware Update Granularity: No Information Provided 00:09:32.080 Per-Namespace SMART Log: Yes 00:09:32.080 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.080 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:32.080 Command Effects Log Page: Supported 00:09:32.080 Get Log Page Extended Data: Supported 00:09:32.080 Telemetry Log Pages: Not Supported 00:09:32.080 Persistent Event Log Pages: Not Supported 00:09:32.080 Supported Log Pages Log Page: May Support 00:09:32.080 Commands Supported & Effects Log Page: Not Supported 00:09:32.080 Feature Identifiers & Effects Log Page: May Support 00:09:32.080 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.080 Data Area 4 for Telemetry Log: Not Supported 00:09:32.080 Error Log Page Entries Supported: 1 00:09:32.080 Keep Alive: Not Supported 00:09:32.080 00:09:32.080 NVM Command Set Attributes 00:09:32.080 ========================== 00:09:32.081 Submission Queue Entry Size 00:09:32.081 Max: 64 00:09:32.081 Min: 64 00:09:32.081 Completion Queue Entry Size 00:09:32.081 Max: 16 00:09:32.081 Min: 16 00:09:32.081 Number of Namespaces: 256 00:09:32.081 Compare Command: Supported 00:09:32.081 Write Uncorrectable Command: Not Supported 00:09:32.081 Dataset Management Command: Supported 00:09:32.081 Write Zeroes Command: Supported 00:09:32.081 Set Features Save Field: Supported 00:09:32.081 Reservations: Not Supported 00:09:32.081 Timestamp: Supported 00:09:32.081 Copy: Supported 00:09:32.081 Volatile Write Cache: Present 00:09:32.081 Atomic Write Unit (Normal): 1 00:09:32.081 Atomic Write Unit (PFail): 1 00:09:32.081 Atomic Compare & Write Unit: 1 00:09:32.081 Fused Compare & Write: Not Supported 00:09:32.081 Scatter-Gather List 00:09:32.081 SGL Command Set: Supported 00:09:32.081 SGL Keyed: Not Supported 00:09:32.081 SGL Bit Bucket Descriptor: Not Supported 00:09:32.081 SGL Metadata Pointer: Not Supported 00:09:32.081 Oversized SGL: Not Supported 00:09:32.081 SGL Metadata Address: Not Supported 00:09:32.081 SGL Offset: Not Supported 00:09:32.081 Transport SGL Data Block: Not Supported 00:09:32.081 Replay Protected Memory Block: Not Supported 00:09:32.081 00:09:32.081 Firmware Slot Information 00:09:32.081 ========================= 00:09:32.081 Active slot: 1 00:09:32.081 Slot 1 Firmware Revision: 1.0 00:09:32.081 00:09:32.081 00:09:32.081 Commands Supported and Effects 00:09:32.081 ============================== 00:09:32.081 Admin Commands 00:09:32.081 -------------- 00:09:32.081 Delete I/O Submission Queue (00h): Supported 00:09:32.081 Create I/O Submission Queue (01h): Supported 00:09:32.081 Get Log Page (02h): Supported 00:09:32.081 Delete I/O Completion Queue (04h): Supported 00:09:32.081 Create I/O Completion Queue (05h): Supported 00:09:32.081 Identify (06h): Supported 00:09:32.081 Abort (08h): Supported 00:09:32.081 Set Features (09h): Supported 00:09:32.081 Get Features (0Ah): Supported
00:09:32.081 Asynchronous Event Request (0Ch): Supported 00:09:32.081 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.081 Directive Send (19h): Supported 00:09:32.081 Directive Receive (1Ah): Supported 00:09:32.081 Virtualization Management (1Ch): Supported 00:09:32.081 Doorbell Buffer Config (7Ch): Supported 00:09:32.081 Format NVM (80h): Supported LBA-Change 00:09:32.081 I/O Commands 00:09:32.081 ------------ 00:09:32.081 Flush (00h): Supported LBA-Change 00:09:32.081 Write (01h): Supported LBA-Change 00:09:32.081 Read (02h): Supported 00:09:32.081 Compare (05h): Supported 00:09:32.081 Write Zeroes (08h): Supported LBA-Change 00:09:32.081 Dataset Management (09h): Supported LBA-Change 00:09:32.081 Unknown (0Ch): Supported 00:09:32.081 Unknown (12h): Supported 00:09:32.081 Copy (19h): Supported LBA-Change 00:09:32.081 Unknown (1Dh): Supported LBA-Change 00:09:32.081 00:09:32.081 Error Log 00:09:32.081 ========= 00:09:32.081 00:09:32.081 Arbitration 00:09:32.081 =========== 00:09:32.081 Arbitration Burst: no limit 00:09:32.081 00:09:32.081 Power Management 00:09:32.081 ================ 00:09:32.081 Number of Power States: 1 00:09:32.081 Current Power State: Power State #0 00:09:32.081 Power State #0: 00:09:32.081 Max Power: 25.00 W 00:09:32.081 Non-Operational State: Operational 00:09:32.081 Entry Latency: 16 microseconds 00:09:32.081 Exit Latency: 4 microseconds 00:09:32.081 Relative Read Throughput: 0 00:09:32.081 Relative Read Latency: 0 00:09:32.081 Relative Write Throughput: 0 00:09:32.081 Relative Write Latency: 0 00:09:32.081 Idle Power: Not Reported 00:09:32.081 Active Power: Not Reported 00:09:32.081 Non-Operational Permissive Mode: Not Supported 00:09:32.081 00:09:32.081 Health Information 00:09:32.081 ================== 00:09:32.081 Critical Warnings: 00:09:32.081 Available Spare Space: OK 00:09:32.081 Temperature: OK 00:09:32.081 Device Reliability: OK 00:09:32.081 Read Only: No 00:09:32.081 Volatile Memory Backup: OK 00:09:32.081 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.081 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.081 Available Spare: 0% 00:09:32.081 Available Spare Threshold: 0% 00:09:32.081 Life Percentage Used: 0% 00:09:32.081 Data Units Read: 1025 00:09:32.081 Data Units Written: 857 00:09:32.081 Host Read Commands: 47881 00:09:32.081 Host Write Commands: 46364 00:09:32.081 Controller Busy Time: 0 minutes 00:09:32.081 Power Cycles: 0 00:09:32.081 Power On Hours: 0 hours 00:09:32.081 Unsafe Shutdowns: 0 00:09:32.081 Unrecoverable Media Errors: 0 00:09:32.081 Lifetime Error Log Entries: 0 00:09:32.081 Warning Temperature Time: 0 minutes 00:09:32.081 Critical Temperature Time: 0 minutes 00:09:32.081 00:09:32.081 Number of Queues 00:09:32.081 ================ 00:09:32.081 Number of I/O Submission Queues: 64 00:09:32.081 Number of I/O Completion Queues: 64 00:09:32.081 00:09:32.081 ZNS Specific Controller Data 00:09:32.081 ============================ 00:09:32.081 Zone Append Size Limit: 0 00:09:32.081 00:09:32.081 00:09:32.081 Active Namespaces 00:09:32.081 ================= 00:09:32.081 Namespace ID:1 00:09:32.081 Error Recovery Timeout: Unlimited 00:09:32.081 Command Set Identifier: NVM (00h) 00:09:32.081 Deallocate: Supported 00:09:32.081 Deallocated/Unwritten Error: Supported 00:09:32.081 Deallocated Read Value: All 0x00 00:09:32.081 Deallocate in Write Zeroes: Not Supported 00:09:32.081 Deallocated Guard Field: 0xFFFF 00:09:32.081 Flush: Supported 00:09:32.081 Reservation: Not Supported 00:09:32.081 Metadata Transferred as: 
Separate Metadata Buffer 00:09:32.081 Namespace Sharing Capabilities: Private 00:09:32.081 Size (in LBAs): 1548666 (5GiB) 00:09:32.081 Capacity (in LBAs): 1548666 (5GiB) 00:09:32.081 Utilization (in LBAs): 1548666 (5GiB) 00:09:32.081 Thin Provisioning: Not Supported 00:09:32.081 Per-NS Atomic Units: No 00:09:32.081 Maximum Single Source Range Length: 128 00:09:32.081 Maximum Copy Length: 128 00:09:32.081 Maximum Source Range Count: 128 00:09:32.081 NGUID/EUI64 Never Reused: No 00:09:32.081 Namespace Write Protected: No 00:09:32.081 Number of LBA Formats: 8 00:09:32.081 Current LBA Format: LBA Format #07 00:09:32.081 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.081 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.081 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.081 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.081 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-07-11 15:17:45.623523] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69163 terminated unexpected 00:09:32.081 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.081 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.081 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.081 00:09:32.081 NVM Specific Namespace Data 00:09:32.081 =========================== 00:09:32.081 Logical Block Storage Tag Mask: 0 00:09:32.081 Protection Information Capabilities: 00:09:32.081 16b Guard Protection Information Storage Tag Support: No 00:09:32.081 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.081 Storage Tag Check Read Support: No 00:09:32.081 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.081 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.081 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.081 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.081 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.081 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.082 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.082 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.082 ===================================================== 00:09:32.082 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.082 ===================================================== 00:09:32.082 Controller Capabilities/Features 00:09:32.082 ================================ 00:09:32.082 Vendor ID: 1b36 00:09:32.082 Subsystem Vendor ID: 1af4 00:09:32.082 Serial Number: 12342 00:09:32.082 Model Number: QEMU NVMe Ctrl 00:09:32.082 Firmware Version: 8.0.0 00:09:32.082 Recommended Arb Burst: 6 00:09:32.082 IEEE OUI Identifier: 00 54 52 00:09:32.082 Multi-path I/O 00:09:32.082 May have multiple subsystem ports: No 00:09:32.082 May have multiple controllers: No 00:09:32.082 Associated with SR-IOV VF: No 00:09:32.082 Max Data Transfer Size: 524288 00:09:32.082 Max Number of Namespaces: 256 00:09:32.082 Max Number of I/O Queues: 64 00:09:32.082 NVMe Specification Version (VS): 1.4 00:09:32.082 NVMe Specification Version (Identify): 1.4 00:09:32.082 Maximum Queue Entries: 2048 00:09:32.082 Contiguous Queues Required: Yes 00:09:32.082 Arbitration
Mechanisms Supported 00:09:32.082 Weighted Round Robin: Not Supported 00:09:32.082 Vendor Specific: Not Supported 00:09:32.082 Reset Timeout: 7500 ms 00:09:32.082 Doorbell Stride: 4 bytes 00:09:32.082 NVM Subsystem Reset: Not Supported 00:09:32.082 Command Sets Supported 00:09:32.082 NVM Command Set: Supported 00:09:32.082 Boot Partition: Not Supported 00:09:32.082 Memory Page Size Minimum: 4096 bytes 00:09:32.082 Memory Page Size Maximum: 65536 bytes 00:09:32.082 Persistent Memory Region: Not Supported 00:09:32.082 Optional Asynchronous Events Supported 00:09:32.082 Namespace Attribute Notices: Supported 00:09:32.082 Firmware Activation Notices: Not Supported 00:09:32.082 ANA Change Notices: Not Supported 00:09:32.082 PLE Aggregate Log Change Notices: Not Supported 00:09:32.082 LBA Status Info Alert Notices: Not Supported 00:09:32.082 EGE Aggregate Log Change Notices: Not Supported 00:09:32.082 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.082 Zone Descriptor Change Notices: Not Supported 00:09:32.082 Discovery Log Change Notices: Not Supported 00:09:32.082 Controller Attributes 00:09:32.082 128-bit Host Identifier: Not Supported 00:09:32.082 Non-Operational Permissive Mode: Not Supported 00:09:32.082 NVM Sets: Not Supported 00:09:32.082 Read Recovery Levels: Not Supported 00:09:32.082 Endurance Groups: Not Supported 00:09:32.082 Predictable Latency Mode: Not Supported 00:09:32.082 Traffic Based Keep Alive: Not Supported 00:09:32.082 Namespace Granularity: Not Supported 00:09:32.082 SQ Associations: Not Supported 00:09:32.082 UUID List: Not Supported 00:09:32.082 Multi-Domain Subsystem: Not Supported 00:09:32.082 Fixed Capacity Management: Not Supported 00:09:32.082 Variable Capacity Management: Not Supported 00:09:32.082 Delete Endurance Group: Not Supported 00:09:32.082 Delete NVM Set: Not Supported 00:09:32.082 Extended LBA Formats Supported: Supported 00:09:32.082 Flexible Data Placement Supported: Not Supported 00:09:32.082 00:09:32.082 Controller Memory Buffer Support 00:09:32.082 ================================ 00:09:32.082 Supported: No 00:09:32.082 00:09:32.082 Persistent Memory Region Support 00:09:32.082 ================================ 00:09:32.082 Supported: No 00:09:32.082 00:09:32.082 Admin Command Set Attributes 00:09:32.082 ============================ 00:09:32.082 Security Send/Receive: Not Supported 00:09:32.082 Format NVM: Supported 00:09:32.082 Firmware Activate/Download: Not Supported 00:09:32.082 Namespace Management: Supported 00:09:32.082 Device Self-Test: Not Supported 00:09:32.082 Directives: Supported 00:09:32.082 NVMe-MI: Not Supported 00:09:32.082 Virtualization Management: Not Supported 00:09:32.082 Doorbell Buffer Config: Supported 00:09:32.082 Get LBA Status Capability: Not Supported 00:09:32.082 Command & Feature Lockdown Capability: Not Supported 00:09:32.082 Abort Command Limit: 4 00:09:32.082 Async Event Request Limit: 4 00:09:32.082 Number of Firmware Slots: N/A 00:09:32.082 Firmware Slot 1 Read-Only: N/A 00:09:32.082 Firmware Activation Without Reset: N/A 00:09:32.082 Multiple Update Detection Support: N/A 00:09:32.082 Firmware Update Granularity: No Information Provided 00:09:32.082 Per-Namespace SMART Log: Yes 00:09:32.082 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.082 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:32.082 Command Effects Log Page: Supported 00:09:32.082 Get Log Page Extended Data: Supported 00:09:32.082 Telemetry Log Pages: Not Supported 00:09:32.082 Persistent Event Log Pages: Not Supported
00:09:32.082 Supported Log Pages Log Page: May Support 00:09:32.082 Commands Supported & Effects Log Page: Not Supported 00:09:32.082 Feature Identifiers & Effects Log Page: May Support 00:09:32.082 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.083 Data Area 4 for Telemetry Log: Not Supported 00:09:32.083 Error Log Page Entries Supported: 1 00:09:32.083 Keep Alive: Not Supported 00:09:32.083 00:09:32.083 NVM Command Set Attributes 00:09:32.083 ========================== 00:09:32.083 Submission Queue Entry Size 00:09:32.083 Max: 64 00:09:32.083 Min: 64 00:09:32.083 Completion Queue Entry Size 00:09:32.083 Max: 16 00:09:32.083 Min: 16 00:09:32.083 Number of Namespaces: 256 00:09:32.083 Compare Command: Supported 00:09:32.083 Write Uncorrectable Command: Not Supported 00:09:32.083 Dataset Management Command: Supported 00:09:32.083 Write Zeroes Command: Supported 00:09:32.083 Set Features Save Field: Supported 00:09:32.083 Reservations: Not Supported 00:09:32.083 Timestamp: Supported 00:09:32.083 Copy: Supported 00:09:32.083 Volatile Write Cache: Present 00:09:32.083 Atomic Write Unit (Normal): 1 00:09:32.083 Atomic Write Unit (PFail): 1 00:09:32.083 Atomic Compare & Write Unit: 1 00:09:32.083 Fused Compare & Write: Not Supported 00:09:32.083 Scatter-Gather List 00:09:32.083 SGL Command Set: Supported 00:09:32.083 SGL Keyed: Not Supported 00:09:32.083 SGL Bit Bucket Descriptor: Not Supported 00:09:32.083 SGL Metadata Pointer: Not Supported 00:09:32.083 Oversized SGL: Not Supported 00:09:32.083 SGL Metadata Address: Not Supported 00:09:32.083 SGL Offset: Not Supported 00:09:32.083 Transport SGL Data Block: Not Supported 00:09:32.083 Replay Protected Memory Block: Not Supported 00:09:32.083 00:09:32.083 Firmware Slot Information 00:09:32.083 ========================= 00:09:32.083 Active slot: 1 00:09:32.083 Slot 1 Firmware Revision: 1.0 00:09:32.083 00:09:32.083 00:09:32.083 Commands Supported and Effects 00:09:32.083 ============================== 00:09:32.083 Admin Commands 00:09:32.083 -------------- 00:09:32.083 Delete I/O Submission Queue (00h): Supported 00:09:32.083 Create I/O Submission Queue (01h): Supported 00:09:32.083 Get Log Page (02h): Supported 00:09:32.083 Delete I/O Completion Queue (04h): Supported 00:09:32.083 Create I/O Completion Queue (05h): Supported 00:09:32.083 Identify (06h): Supported 00:09:32.083 Abort (08h): Supported 00:09:32.083 Set Features (09h): Supported 00:09:32.083 Get Features (0Ah): Supported 00:09:32.083 Asynchronous Event Request (0Ch): Supported 00:09:32.083 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.083 Directive Send (19h): Supported 00:09:32.083 Directive Receive (1Ah): Supported 00:09:32.083 Virtualization Management (1Ch): Supported 00:09:32.083 Doorbell Buffer Config (7Ch): Supported 00:09:32.083 Format NVM (80h): Supported LBA-Change 00:09:32.083 I/O Commands 00:09:32.083 ------------ 00:09:32.083 Flush (00h): Supported LBA-Change 00:09:32.083 Write (01h): Supported LBA-Change 00:09:32.083 Read (02h): Supported 00:09:32.083 Compare (05h): Supported 00:09:32.083 Write Zeroes (08h): Supported LBA-Change 00:09:32.083 Dataset Management (09h): Supported LBA-Change 00:09:32.083 Unknown (0Ch): Supported 00:09:32.083 Unknown (12h): Supported 00:09:32.083 Copy (19h): Supported LBA-Change 00:09:32.083 Unknown (1Dh): Supported LBA-Change 00:09:32.083 00:09:32.083 Error Log 00:09:32.083 ========= 00:09:32.083 00:09:32.083 Arbitration 00:09:32.083 =========== 00:09:32.083 Arbitration Burst: no limit 00:09:32.083 00:09:32.083
Power Management 00:09:32.083 ================ 00:09:32.083 Number of Power States: 1 00:09:32.083 Current Power State: Power State #0 00:09:32.083 Power State #0: 00:09:32.083 Max Power: 25.00 W 00:09:32.083 Non-Operational State: Operational 00:09:32.083 Entry Latency: 16 microseconds 00:09:32.083 Exit Latency: 4 microseconds 00:09:32.083 Relative Read Throughput: 0 00:09:32.083 Relative Read Latency: 0 00:09:32.083 Relative Write Throughput: 0 00:09:32.083 Relative Write Latency: 0 00:09:32.083 Idle Power: Not Reported 00:09:32.083 Active Power: Not Reported 00:09:32.083 Non-Operational Permissive Mode: Not Supported 00:09:32.083 00:09:32.083 Health Information 00:09:32.083 ================== 00:09:32.083 Critical Warnings: 00:09:32.083 Available Spare Space: OK 00:09:32.083 Temperature: OK 00:09:32.083 Device Reliability: OK 00:09:32.083 Read Only: No 00:09:32.083 Volatile Memory Backup: OK 00:09:32.083 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.083 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.083 Available Spare: 0% 00:09:32.083 Available Spare Threshold: 0% 00:09:32.083 Life Percentage Used: 0% 00:09:32.083 Data Units Read: 2177 00:09:32.083 Data Units Written: 1857 00:09:32.083 Host Read Commands: 100033 00:09:32.083 Host Write Commands: 95803 00:09:32.083 Controller Busy Time: 0 minutes 00:09:32.083 Power Cycles: 0 00:09:32.083 Power On Hours: 0 hours 00:09:32.083 Unsafe Shutdowns: 0 00:09:32.083 Unrecoverable Media Errors: 0 00:09:32.083 Lifetime Error Log Entries: 0 00:09:32.083 Warning Temperature Time: 0 minutes 00:09:32.083 Critical Temperature Time: 0 minutes 00:09:32.083 00:09:32.083 Number of Queues 00:09:32.083 ================ 00:09:32.083 Number of I/O Submission Queues: 64 00:09:32.083 Number of I/O Completion Queues: 64 00:09:32.083 00:09:32.083 ZNS Specific Controller Data 00:09:32.083 ============================ 00:09:32.083 Zone Append Size Limit: 0 00:09:32.083 00:09:32.083 00:09:32.083 Active Namespaces 00:09:32.083 ================= 00:09:32.083 Namespace ID:1 00:09:32.083 Error Recovery Timeout: Unlimited 00:09:32.083 Command Set Identifier: NVM (00h) 00:09:32.083 Deallocate: Supported 00:09:32.083 Deallocated/Unwritten Error: Supported 00:09:32.083 Deallocated Read Value: All 0x00 00:09:32.083 Deallocate in Write Zeroes: Not Supported 00:09:32.083 Deallocated Guard Field: 0xFFFF 00:09:32.083 Flush: Supported 00:09:32.083 Reservation: Not Supported 00:09:32.083 Namespace Sharing Capabilities: Private 00:09:32.083 Size (in LBAs): 1048576 (4GiB) 00:09:32.083 Capacity (in LBAs): 1048576 (4GiB) 00:09:32.083 Utilization (in LBAs): 1048576 (4GiB) 00:09:32.083 Thin Provisioning: Not Supported 00:09:32.083 Per-NS Atomic Units: No 00:09:32.083 Maximum Single Source Range Length: 128 00:09:32.083 Maximum Copy Length: 128 00:09:32.083 Maximum Source Range Count: 128 00:09:32.083 NGUID/EUI64 Never Reused: No 00:09:32.083 Namespace Write Protected: No 00:09:32.083 Number of LBA Formats: 8 00:09:32.083 Current LBA Format: LBA Format #04 00:09:32.083 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.083 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.083 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.083 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.083 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.083 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.083 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.083 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.083 00:09:32.083 NVM 
Specific Namespace Data 00:09:32.083 =========================== 00:09:32.083 Logical Block Storage Tag Mask: 0 00:09:32.083 Protection Information Capabilities: 00:09:32.083 16b Guard Protection Information Storage Tag Support: No 00:09:32.083 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.083 Storage Tag Check Read Support: No 00:09:32.083 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.083 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.083 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Namespace ID:2 00:09:32.084 Error Recovery Timeout: Unlimited 00:09:32.084 Command Set Identifier: NVM (00h) 00:09:32.084 Deallocate: Supported 00:09:32.084 Deallocated/Unwritten Error: Supported 00:09:32.084 Deallocated Read Value: All 0x00 00:09:32.084 Deallocate in Write Zeroes: Not Supported 00:09:32.084 Deallocated Guard Field: 0xFFFF 00:09:32.084 Flush: Supported 00:09:32.084 Reservation: Not Supported 00:09:32.084 Namespace Sharing Capabilities: Private 00:09:32.084 Size (in LBAs): 1048576 (4GiB) 00:09:32.084 Capacity (in LBAs): 1048576 (4GiB) 00:09:32.084 Utilization (in LBAs): 1048576 (4GiB) 00:09:32.084 Thin Provisioning: Not Supported 00:09:32.084 Per-NS Atomic Units: No 00:09:32.084 Maximum Single Source Range Length: 128 00:09:32.084 Maximum Copy Length: 128 00:09:32.084 Maximum Source Range Count: 128 00:09:32.084 NGUID/EUI64 Never Reused: No 00:09:32.084 Namespace Write Protected: No 00:09:32.084 Number of LBA Formats: 8 00:09:32.084 Current LBA Format: LBA Format #04 00:09:32.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.084 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.084 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.084 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.084 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.084 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.084 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.084 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.084 00:09:32.084 NVM Specific Namespace Data 00:09:32.084 =========================== 00:09:32.084 Logical Block Storage Tag Mask: 0 00:09:32.084 Protection Information Capabilities: 00:09:32.084 16b Guard Protection Information Storage Tag Support: No 00:09:32.084 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.084 Storage Tag Check Read Support: No 00:09:32.084 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA 
Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Namespace ID:3 00:09:32.084 Error Recovery Timeout: Unlimited 00:09:32.084 Command Set Identifier: NVM (00h) 00:09:32.084 Deallocate: Supported 00:09:32.084 Deallocated/Unwritten Error: Supported 00:09:32.084 Deallocated Read Value: All 0x00 00:09:32.084 Deallocate in Write Zeroes: Not Supported 00:09:32.084 Deallocated Guard Field: 0xFFFF 00:09:32.084 Flush: Supported 00:09:32.084 Reservation: Not Supported 00:09:32.084 Namespace Sharing Capabilities: Private 00:09:32.084 Size (in LBAs): 1048576 (4GiB) 00:09:32.084 Capacity (in LBAs): 1048576 (4GiB) 00:09:32.084 Utilization (in LBAs): 1048576 (4GiB) 00:09:32.084 Thin Provisioning: Not Supported 00:09:32.084 Per-NS Atomic Units: No 00:09:32.084 Maximum Single Source Range Length: 128 00:09:32.084 Maximum Copy Length: 128 00:09:32.084 Maximum Source Range Count: 128 00:09:32.084 NGUID/EUI64 Never Reused: No 00:09:32.084 Namespace Write Protected: No 00:09:32.084 Number of LBA Formats: 8 00:09:32.084 Current LBA Format: LBA Format #04 00:09:32.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.084 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.084 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.084 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.084 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.084 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.084 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.084 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.084 00:09:32.084 NVM Specific Namespace Data 00:09:32.084 =========================== 00:09:32.084 Logical Block Storage Tag Mask: 0 00:09:32.084 Protection Information Capabilities: 00:09:32.084 16b Guard Protection Information Storage Tag Support: No 00:09:32.084 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.084 Storage Tag Check Read Support: No 00:09:32.084 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.084 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:32.084 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:32.343 ===================================================== 00:09:32.343 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.343 
===================================================== 00:09:32.343 Controller Capabilities/Features 00:09:32.343 ================================ 00:09:32.343 Vendor ID: 1b36 00:09:32.343 Subsystem Vendor ID: 1af4 00:09:32.343 Serial Number: 12340 00:09:32.343 Model Number: QEMU NVMe Ctrl 00:09:32.343 Firmware Version: 8.0.0 00:09:32.343 Recommended Arb Burst: 6 00:09:32.343 IEEE OUI Identifier: 00 54 52 00:09:32.343 Multi-path I/O 00:09:32.343 May have multiple subsystem ports: No 00:09:32.343 May have multiple controllers: No 00:09:32.343 Associated with SR-IOV VF: No 00:09:32.343 Max Data Transfer Size: 524288 00:09:32.343 Max Number of Namespaces: 256 00:09:32.343 Max Number of I/O Queues: 64 00:09:32.343 NVMe Specification Version (VS): 1.4 00:09:32.343 NVMe Specification Version (Identify): 1.4 00:09:32.343 Maximum Queue Entries: 2048 00:09:32.343 Contiguous Queues Required: Yes 00:09:32.343 Arbitration Mechanisms Supported 00:09:32.343 Weighted Round Robin: Not Supported 00:09:32.343 Vendor Specific: Not Supported 00:09:32.343 Reset Timeout: 7500 ms 00:09:32.343 Doorbell Stride: 4 bytes 00:09:32.343 NVM Subsystem Reset: Not Supported 00:09:32.343 Command Sets Supported 00:09:32.343 NVM Command Set: Supported 00:09:32.343 Boot Partition: Not Supported 00:09:32.343 Memory Page Size Minimum: 4096 bytes 00:09:32.343 Memory Page Size Maximum: 65536 bytes 00:09:32.343 Persistent Memory Region: Not Supported 00:09:32.343 Optional Asynchronous Events Supported 00:09:32.343 Namespace Attribute Notices: Supported 00:09:32.343 Firmware Activation Notices: Not Supported 00:09:32.343 ANA Change Notices: Not Supported 00:09:32.343 PLE Aggregate Log Change Notices: Not Supported 00:09:32.343 LBA Status Info Alert Notices: Not Supported 00:09:32.343 EGE Aggregate Log Change Notices: Not Supported 00:09:32.343 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.343 Zone Descriptor Change Notices: Not Supported 00:09:32.344 Discovery Log Change Notices: Not Supported 00:09:32.344 Controller Attributes 00:09:32.344 128-bit Host Identifier: Not Supported 00:09:32.344 Non-Operational Permissive Mode: Not Supported 00:09:32.344 NVM Sets: Not Supported 00:09:32.344 Read Recovery Levels: Not Supported 00:09:32.344 Endurance Groups: Not Supported 00:09:32.344 Predictable Latency Mode: Not Supported 00:09:32.344 Traffic Based Keep Alive: Not Supported 00:09:32.344 Namespace Granularity: Not Supported 00:09:32.344 SQ Associations: Not Supported 00:09:32.344 UUID List: Not Supported 00:09:32.344 Multi-Domain Subsystem: Not Supported 00:09:32.344 Fixed Capacity Management: Not Supported 00:09:32.344 Variable Capacity Management: Not Supported 00:09:32.344 Delete Endurance Group: Not Supported 00:09:32.344 Delete NVM Set: Not Supported 00:09:32.344 Extended LBA Formats Supported: Supported 00:09:32.344 Flexible Data Placement Supported: Not Supported 00:09:32.344 00:09:32.344 Controller Memory Buffer Support 00:09:32.344 ================================ 00:09:32.344 Supported: No 00:09:32.344 00:09:32.344 Persistent Memory Region Support 00:09:32.344 ================================ 00:09:32.344 Supported: No 00:09:32.344 00:09:32.344 Admin Command Set Attributes 00:09:32.344 ============================ 00:09:32.344 Security Send/Receive: Not Supported 00:09:32.344 Format NVM: Supported 00:09:32.344 Firmware Activate/Download: Not Supported 00:09:32.344 Namespace Management: Supported 00:09:32.344 Device Self-Test: Not Supported 00:09:32.344 Directives: Supported 00:09:32.344 NVMe-MI: Not Supported
00:09:32.344 Virtualization Management: Not Supported 00:09:32.344 Doorbell Buffer Config: Supported 00:09:32.344 Get LBA Status Capability: Not Supported 00:09:32.344 Command & Feature Lockdown Capability: Not Supported 00:09:32.344 Abort Command Limit: 4 00:09:32.344 Async Event Request Limit: 4 00:09:32.344 Number of Firmware Slots: N/A 00:09:32.344 Firmware Slot 1 Read-Only: N/A 00:09:32.344 Firmware Activation Without Reset: N/A 00:09:32.344 Multiple Update Detection Support: N/A 00:09:32.344 Firmware Update Granularity: No Information Provided 00:09:32.344 Per-Namespace SMART Log: Yes 00:09:32.344 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.344 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:32.344 Command Effects Log Page: Supported 00:09:32.344 Get Log Page Extended Data: Supported 00:09:32.344 Telemetry Log Pages: Not Supported 00:09:32.344 Persistent Event Log Pages: Not Supported 00:09:32.344 Supported Log Pages Log Page: May Support 00:09:32.344 Commands Supported & Effects Log Page: Not Supported 00:09:32.344 Feature Identifiers & Effects Log Page: May Support 00:09:32.344 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.344 Data Area 4 for Telemetry Log: Not Supported 00:09:32.344 Error Log Page Entries Supported: 1 00:09:32.344 Keep Alive: Not Supported 00:09:32.344 00:09:32.344 NVM Command Set Attributes 00:09:32.344 ========================== 00:09:32.344 Submission Queue Entry Size 00:09:32.344 Max: 64 00:09:32.344 Min: 64 00:09:32.344 Completion Queue Entry Size 00:09:32.344 Max: 16 00:09:32.344 Min: 16 00:09:32.344 Number of Namespaces: 256 00:09:32.344 Compare Command: Supported 00:09:32.344 Write Uncorrectable Command: Not Supported 00:09:32.344 Dataset Management Command: Supported 00:09:32.344 Write Zeroes Command: Supported 00:09:32.344 Set Features Save Field: Supported 00:09:32.344 Reservations: Not Supported 00:09:32.344 Timestamp: Supported 00:09:32.344 Copy: Supported 00:09:32.344 Volatile Write Cache: Present 00:09:32.344 Atomic Write Unit (Normal): 1 00:09:32.344 Atomic Write Unit (PFail): 1 00:09:32.344 Atomic Compare & Write Unit: 1 00:09:32.344 Fused Compare & Write: Not Supported 00:09:32.344 Scatter-Gather List 00:09:32.344 SGL Command Set: Supported 00:09:32.344 SGL Keyed: Not Supported 00:09:32.344 SGL Bit Bucket Descriptor: Not Supported 00:09:32.344 SGL Metadata Pointer: Not Supported 00:09:32.344 Oversized SGL: Not Supported 00:09:32.344 SGL Metadata Address: Not Supported 00:09:32.344 SGL Offset: Not Supported 00:09:32.344 Transport SGL Data Block: Not Supported 00:09:32.344 Replay Protected Memory Block: Not Supported 00:09:32.344 00:09:32.344 Firmware Slot Information 00:09:32.344 ========================= 00:09:32.344 Active slot: 1 00:09:32.344 Slot 1 Firmware Revision: 1.0 00:09:32.344 00:09:32.344 00:09:32.344 Commands Supported and Effects 00:09:32.344 ============================== 00:09:32.344 Admin Commands 00:09:32.344 -------------- 00:09:32.344 Delete I/O Submission Queue (00h): Supported 00:09:32.344 Create I/O Submission Queue (01h): Supported 00:09:32.344 Get Log Page (02h): Supported 00:09:32.344 Delete I/O Completion Queue (04h): Supported 00:09:32.344 Create I/O Completion Queue (05h): Supported 00:09:32.344 Identify (06h): Supported 00:09:32.344 Abort (08h): Supported 00:09:32.344 Set Features (09h): Supported 00:09:32.344 Get Features (0Ah): Supported 00:09:32.344 Asynchronous Event Request (0Ch): Supported 00:09:32.344 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.344 Directive
Send (19h): Supported 00:09:32.344 Directive Receive (1Ah): Supported 00:09:32.344 Virtualization Management (1Ch): Supported 00:09:32.344 Doorbell Buffer Config (7Ch): Supported 00:09:32.344 Format NVM (80h): Supported LBA-Change 00:09:32.344 I/O Commands 00:09:32.344 ------------ 00:09:32.344 Flush (00h): Supported LBA-Change 00:09:32.344 Write (01h): Supported LBA-Change 00:09:32.344 Read (02h): Supported 00:09:32.344 Compare (05h): Supported 00:09:32.344 Write Zeroes (08h): Supported LBA-Change 00:09:32.344 Dataset Management (09h): Supported LBA-Change 00:09:32.344 Unknown (0Ch): Supported 00:09:32.344 Unknown (12h): Supported 00:09:32.344 Copy (19h): Supported LBA-Change 00:09:32.344 Unknown (1Dh): Supported LBA-Change 00:09:32.344 00:09:32.344 Error Log 00:09:32.344 ========= 00:09:32.344 00:09:32.344 Arbitration 00:09:32.344 =========== 00:09:32.344 Arbitration Burst: no limit 00:09:32.344 00:09:32.344 Power Management 00:09:32.344 ================ 00:09:32.344 Number of Power States: 1 00:09:32.344 Current Power State: Power State #0 00:09:32.344 Power State #0: 00:09:32.344 Max Power: 25.00 W 00:09:32.344 Non-Operational State: Operational 00:09:32.344 Entry Latency: 16 microseconds 00:09:32.344 Exit Latency: 4 microseconds 00:09:32.344 Relative Read Throughput: 0 00:09:32.344 Relative Read Latency: 0 00:09:32.344 Relative Write Throughput: 0 00:09:32.344 Relative Write Latency: 0 00:09:32.603 Idle Power: Not Reported 00:09:32.603 Active Power: Not Reported 00:09:32.603 Non-Operational Permissive Mode: Not Supported 00:09:32.603 00:09:32.603 Health Information 00:09:32.603 ================== 00:09:32.603 Critical Warnings: 00:09:32.603 Available Spare Space: OK 00:09:32.603 Temperature: OK 00:09:32.603 Device Reliability: OK 00:09:32.603 Read Only: No 00:09:32.603 Volatile Memory Backup: OK 00:09:32.603 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.603 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.603 Available Spare: 0% 00:09:32.603 Available Spare Threshold: 0% 00:09:32.603 Life Percentage Used: 0% 00:09:32.603 Data Units Read: 1025 00:09:32.603 Data Units Written: 857 00:09:32.603 Host Read Commands: 47881 00:09:32.603 Host Write Commands: 46364 00:09:32.603 Controller Busy Time: 0 minutes 00:09:32.603 Power Cycles: 0 00:09:32.603 Power On Hours: 0 hours 00:09:32.603 Unsafe Shutdowns: 0 00:09:32.603 Unrecoverable Media Errors: 0 00:09:32.603 Lifetime Error Log Entries: 0 00:09:32.603 Warning Temperature Time: 0 minutes 00:09:32.603 Critical Temperature Time: 0 minutes 00:09:32.603 00:09:32.603 Number of Queues 00:09:32.603 ================ 00:09:32.603 Number of I/O Submission Queues: 64 00:09:32.603 Number of I/O Completion Queues: 64 00:09:32.603 00:09:32.603 ZNS Specific Controller Data 00:09:32.603 ============================ 00:09:32.603 Zone Append Size Limit: 0 00:09:32.603 00:09:32.603 00:09:32.603 Active Namespaces 00:09:32.603 ================= 00:09:32.603 Namespace ID:1 00:09:32.603 Error Recovery Timeout: Unlimited 00:09:32.603 Command Set Identifier: NVM (00h) 00:09:32.603 Deallocate: Supported 00:09:32.603 Deallocated/Unwritten Error: Supported 00:09:32.603 Deallocated Read Value: All 0x00 00:09:32.603 Deallocate in Write Zeroes: Not Supported 00:09:32.603 Deallocated Guard Field: 0xFFFF 00:09:32.603 Flush: Supported 00:09:32.603 Reservation: Not Supported 00:09:32.603 Metadata Transferred as: Separate Metadata Buffer 00:09:32.603 Namespace Sharing Capabilities: Private 00:09:32.603 Size (in LBAs): 1548666 (5GiB) 00:09:32.603 Capacity (in 
LBAs): 1548666 (5GiB) 00:09:32.603 Utilization (in LBAs): 1548666 (5GiB) 00:09:32.603 Thin Provisioning: Not Supported 00:09:32.603 Per-NS Atomic Units: No 00:09:32.603 Maximum Single Source Range Length: 128 00:09:32.603 Maximum Copy Length: 128 00:09:32.603 Maximum Source Range Count: 128 00:09:32.603 NGUID/EUI64 Never Reused: No 00:09:32.604 Namespace Write Protected: No 00:09:32.604 Number of LBA Formats: 8 00:09:32.604 Current LBA Format: LBA Format #07 00:09:32.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.604 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.604 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.604 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.604 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.604 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.604 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.604 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.604 00:09:32.604 NVM Specific Namespace Data 00:09:32.604 =========================== 00:09:32.604 Logical Block Storage Tag Mask: 0 00:09:32.604 Protection Information Capabilities: 00:09:32.604 16b Guard Protection Information Storage Tag Support: No 00:09:32.604 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.604 Storage Tag Check Read Support: No 00:09:32.604 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.604 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:32.604 15:17:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:32.863 ===================================================== 00:09:32.863 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.863 ===================================================== 00:09:32.863 Controller Capabilities/Features 00:09:32.863 ================================ 00:09:32.863 Vendor ID: 1b36 00:09:32.863 Subsystem Vendor ID: 1af4 00:09:32.863 Serial Number: 12341 00:09:32.863 Model Number: QEMU NVMe Ctrl 00:09:32.863 Firmware Version: 8.0.0 00:09:32.863 Recommended Arb Burst: 6 00:09:32.863 IEEE OUI Identifier: 00 54 52 00:09:32.863 Multi-path I/O 00:09:32.863 May have multiple subsystem ports: No 00:09:32.863 May have multiple controllers: No 00:09:32.863 Associated with SR-IOV VF: No 00:09:32.863 Max Data Transfer Size: 524288 00:09:32.863 Max Number of Namespaces: 256 00:09:32.863 Max Number of I/O Queues: 64 00:09:32.863 NVMe Specification Version (VS): 1.4 00:09:32.863 NVMe Specification Version (Identify): 1.4 00:09:32.863 Maximum Queue Entries: 2048 00:09:32.863 Contiguous Queues Required: Yes 00:09:32.863 Arbitration Mechanisms Supported 00:09:32.863 Weighted Round 
Robin: Not Supported 00:09:32.863 Vendor Specific: Not Supported 00:09:32.863 Reset Timeout: 7500 ms 00:09:32.863 Doorbell Stride: 4 bytes 00:09:32.863 NVM Subsystem Reset: Not Supported 00:09:32.863 Command Sets Supported 00:09:32.863 NVM Command Set: Supported 00:09:32.863 Boot Partition: Not Supported 00:09:32.863 Memory Page Size Minimum: 4096 bytes 00:09:32.863 Memory Page Size Maximum: 65536 bytes 00:09:32.863 Persistent Memory Region: Not Supported 00:09:32.863 Optional Asynchronous Events Supported 00:09:32.863 Namespace Attribute Notices: Supported 00:09:32.863 Firmware Activation Notices: Not Supported 00:09:32.863 ANA Change Notices: Not Supported 00:09:32.863 PLE Aggregate Log Change Notices: Not Supported 00:09:32.863 LBA Status Info Alert Notices: Not Supported 00:09:32.863 EGE Aggregate Log Change Notices: Not Supported 00:09:32.863 Normal NVM Subsystem Shutdown event: Not Supported 00:09:32.863 Zone Descriptor Change Notices: Not Supported 00:09:32.863 Discovery Log Change Notices: Not Supported 00:09:32.863 Controller Attributes 00:09:32.863 128-bit Host Identifier: Not Supported 00:09:32.863 Non-Operational Permissive Mode: Not Supported 00:09:32.863 NVM Sets: Not Supported 00:09:32.863 Read Recovery Levels: Not Supported 00:09:32.863 Endurance Groups: Not Supported 00:09:32.863 Predictable Latency Mode: Not Supported 00:09:32.863 Traffic Based Keep ALive: Not Supported 00:09:32.863 Namespace Granularity: Not Supported 00:09:32.863 SQ Associations: Not Supported 00:09:32.863 UUID List: Not Supported 00:09:32.863 Multi-Domain Subsystem: Not Supported 00:09:32.863 Fixed Capacity Management: Not Supported 00:09:32.863 Variable Capacity Management: Not Supported 00:09:32.863 Delete Endurance Group: Not Supported 00:09:32.863 Delete NVM Set: Not Supported 00:09:32.863 Extended LBA Formats Supported: Supported 00:09:32.863 Flexible Data Placement Supported: Not Supported 00:09:32.863 00:09:32.863 Controller Memory Buffer Support 00:09:32.863 ================================ 00:09:32.863 Supported: No 00:09:32.863 00:09:32.863 Persistent Memory Region Support 00:09:32.863 ================================ 00:09:32.863 Supported: No 00:09:32.863 00:09:32.863 Admin Command Set Attributes 00:09:32.863 ============================ 00:09:32.863 Security Send/Receive: Not Supported 00:09:32.863 Format NVM: Supported 00:09:32.863 Firmware Activate/Download: Not Supported 00:09:32.863 Namespace Management: Supported 00:09:32.863 Device Self-Test: Not Supported 00:09:32.863 Directives: Supported 00:09:32.863 NVMe-MI: Not Supported 00:09:32.863 Virtualization Management: Not Supported 00:09:32.863 Doorbell Buffer Config: Supported 00:09:32.863 Get LBA Status Capability: Not Supported 00:09:32.863 Command & Feature Lockdown Capability: Not Supported 00:09:32.863 Abort Command Limit: 4 00:09:32.863 Async Event Request Limit: 4 00:09:32.863 Number of Firmware Slots: N/A 00:09:32.863 Firmware Slot 1 Read-Only: N/A 00:09:32.863 Firmware Activation Without Reset: N/A 00:09:32.863 Multiple Update Detection Support: N/A 00:09:32.863 Firmware Update Granularity: No Information Provided 00:09:32.863 Per-Namespace SMART Log: Yes 00:09:32.863 Asymmetric Namespace Access Log Page: Not Supported 00:09:32.863 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:32.863 Command Effects Log Page: Supported 00:09:32.863 Get Log Page Extended Data: Supported 00:09:32.863 Telemetry Log Pages: Not Supported 00:09:32.864 Persistent Event Log Pages: Not Supported 00:09:32.864 Supported Log Pages Log Page: May Support 
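One figure in these dumps worth decoding: Max Data Transfer Size is printed in bytes (524288), which against the 4096-byte minimum memory page size works out to 128 pages, i.e. an MDTS exponent of 7. A quick check in shell:

echo $(( 524288 / 1024 ))   # -> 512, so each I/O is capped at 512 KiB
echo $(( 524288 / 4096 ))   # -> 128 minimum-size pages, 2^7, MDTS = 7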
00:09:32.864 Commands Supported & Effects Log Page: Not Supported 00:09:32.864 Feature Identifiers & Effects Log Page:May Support 00:09:32.864 NVMe-MI Commands & Effects Log Page: May Support 00:09:32.864 Data Area 4 for Telemetry Log: Not Supported 00:09:32.864 Error Log Page Entries Supported: 1 00:09:32.864 Keep Alive: Not Supported 00:09:32.864 00:09:32.864 NVM Command Set Attributes 00:09:32.864 ========================== 00:09:32.864 Submission Queue Entry Size 00:09:32.864 Max: 64 00:09:32.864 Min: 64 00:09:32.864 Completion Queue Entry Size 00:09:32.864 Max: 16 00:09:32.864 Min: 16 00:09:32.864 Number of Namespaces: 256 00:09:32.864 Compare Command: Supported 00:09:32.864 Write Uncorrectable Command: Not Supported 00:09:32.864 Dataset Management Command: Supported 00:09:32.864 Write Zeroes Command: Supported 00:09:32.864 Set Features Save Field: Supported 00:09:32.864 Reservations: Not Supported 00:09:32.864 Timestamp: Supported 00:09:32.864 Copy: Supported 00:09:32.864 Volatile Write Cache: Present 00:09:32.864 Atomic Write Unit (Normal): 1 00:09:32.864 Atomic Write Unit (PFail): 1 00:09:32.864 Atomic Compare & Write Unit: 1 00:09:32.864 Fused Compare & Write: Not Supported 00:09:32.864 Scatter-Gather List 00:09:32.864 SGL Command Set: Supported 00:09:32.864 SGL Keyed: Not Supported 00:09:32.864 SGL Bit Bucket Descriptor: Not Supported 00:09:32.864 SGL Metadata Pointer: Not Supported 00:09:32.864 Oversized SGL: Not Supported 00:09:32.864 SGL Metadata Address: Not Supported 00:09:32.864 SGL Offset: Not Supported 00:09:32.864 Transport SGL Data Block: Not Supported 00:09:32.864 Replay Protected Memory Block: Not Supported 00:09:32.864 00:09:32.864 Firmware Slot Information 00:09:32.864 ========================= 00:09:32.864 Active slot: 1 00:09:32.864 Slot 1 Firmware Revision: 1.0 00:09:32.864 00:09:32.864 00:09:32.864 Commands Supported and Effects 00:09:32.864 ============================== 00:09:32.864 Admin Commands 00:09:32.864 -------------- 00:09:32.864 Delete I/O Submission Queue (00h): Supported 00:09:32.864 Create I/O Submission Queue (01h): Supported 00:09:32.864 Get Log Page (02h): Supported 00:09:32.864 Delete I/O Completion Queue (04h): Supported 00:09:32.864 Create I/O Completion Queue (05h): Supported 00:09:32.864 Identify (06h): Supported 00:09:32.864 Abort (08h): Supported 00:09:32.864 Set Features (09h): Supported 00:09:32.864 Get Features (0Ah): Supported 00:09:32.864 Asynchronous Event Request (0Ch): Supported 00:09:32.864 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:32.864 Directive Send (19h): Supported 00:09:32.864 Directive Receive (1Ah): Supported 00:09:32.864 Virtualization Management (1Ch): Supported 00:09:32.864 Doorbell Buffer Config (7Ch): Supported 00:09:32.864 Format NVM (80h): Supported LBA-Change 00:09:32.864 I/O Commands 00:09:32.864 ------------ 00:09:32.864 Flush (00h): Supported LBA-Change 00:09:32.864 Write (01h): Supported LBA-Change 00:09:32.864 Read (02h): Supported 00:09:32.864 Compare (05h): Supported 00:09:32.864 Write Zeroes (08h): Supported LBA-Change 00:09:32.864 Dataset Management (09h): Supported LBA-Change 00:09:32.864 Unknown (0Ch): Supported 00:09:32.864 Unknown (12h): Supported 00:09:32.864 Copy (19h): Supported LBA-Change 00:09:32.864 Unknown (1Dh): Supported LBA-Change 00:09:32.864 00:09:32.864 Error Log 00:09:32.864 ========= 00:09:32.864 00:09:32.864 Arbitration 00:09:32.864 =========== 00:09:32.864 Arbitration Burst: no limit 00:09:32.864 00:09:32.864 Power Management 00:09:32.864 ================ 
00:09:32.864 Number of Power States: 1 00:09:32.864 Current Power State: Power State #0 00:09:32.864 Power State #0: 00:09:32.864 Max Power: 25.00 W 00:09:32.864 Non-Operational State: Operational 00:09:32.864 Entry Latency: 16 microseconds 00:09:32.864 Exit Latency: 4 microseconds 00:09:32.864 Relative Read Throughput: 0 00:09:32.864 Relative Read Latency: 0 00:09:32.864 Relative Write Throughput: 0 00:09:32.864 Relative Write Latency: 0 00:09:32.864 Idle Power: Not Reported 00:09:32.864 Active Power: Not Reported 00:09:32.864 Non-Operational Permissive Mode: Not Supported 00:09:32.864 00:09:32.864 Health Information 00:09:32.864 ================== 00:09:32.864 Critical Warnings: 00:09:32.864 Available Spare Space: OK 00:09:32.864 Temperature: OK 00:09:32.864 Device Reliability: OK 00:09:32.864 Read Only: No 00:09:32.864 Volatile Memory Backup: OK 00:09:32.864 Current Temperature: 323 Kelvin (50 Celsius) 00:09:32.864 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:32.864 Available Spare: 0% 00:09:32.864 Available Spare Threshold: 0% 00:09:32.864 Life Percentage Used: 0% 00:09:32.864 Data Units Read: 745 00:09:32.864 Data Units Written: 596 00:09:32.864 Host Read Commands: 33987 00:09:32.864 Host Write Commands: 31762 00:09:32.864 Controller Busy Time: 0 minutes 00:09:32.864 Power Cycles: 0 00:09:32.864 Power On Hours: 0 hours 00:09:32.864 Unsafe Shutdowns: 0 00:09:32.864 Unrecoverable Media Errors: 0 00:09:32.864 Lifetime Error Log Entries: 0 00:09:32.864 Warning Temperature Time: 0 minutes 00:09:32.864 Critical Temperature Time: 0 minutes 00:09:32.864 00:09:32.864 Number of Queues 00:09:32.864 ================ 00:09:32.864 Number of I/O Submission Queues: 64 00:09:32.864 Number of I/O Completion Queues: 64 00:09:32.864 00:09:32.864 ZNS Specific Controller Data 00:09:32.864 ============================ 00:09:32.864 Zone Append Size Limit: 0 00:09:32.864 00:09:32.864 00:09:32.864 Active Namespaces 00:09:32.864 ================= 00:09:32.864 Namespace ID:1 00:09:32.864 Error Recovery Timeout: Unlimited 00:09:32.864 Command Set Identifier: NVM (00h) 00:09:32.864 Deallocate: Supported 00:09:32.864 Deallocated/Unwritten Error: Supported 00:09:32.864 Deallocated Read Value: All 0x00 00:09:32.864 Deallocate in Write Zeroes: Not Supported 00:09:32.864 Deallocated Guard Field: 0xFFFF 00:09:32.864 Flush: Supported 00:09:32.864 Reservation: Not Supported 00:09:32.864 Namespace Sharing Capabilities: Private 00:09:32.864 Size (in LBAs): 1310720 (5GiB) 00:09:32.864 Capacity (in LBAs): 1310720 (5GiB) 00:09:32.864 Utilization (in LBAs): 1310720 (5GiB) 00:09:32.864 Thin Provisioning: Not Supported 00:09:32.864 Per-NS Atomic Units: No 00:09:32.864 Maximum Single Source Range Length: 128 00:09:32.864 Maximum Copy Length: 128 00:09:32.864 Maximum Source Range Count: 128 00:09:32.864 NGUID/EUI64 Never Reused: No 00:09:32.864 Namespace Write Protected: No 00:09:32.864 Number of LBA Formats: 8 00:09:32.864 Current LBA Format: LBA Format #04 00:09:32.864 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:32.864 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:32.864 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:32.864 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:32.864 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:32.864 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:32.864 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:32.864 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:32.864 00:09:32.864 NVM Specific Namespace Data 00:09:32.864 
=========================== 00:09:32.864 Logical Block Storage Tag Mask: 0 00:09:32.864 Protection Information Capabilities: 00:09:32.864 16b Guard Protection Information Storage Tag Support: No 00:09:32.864 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:32.864 Storage Tag Check Read Support: No 00:09:32.864 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:32.864 15:17:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:32.864 15:17:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:33.124 ===================================================== 00:09:33.124 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:33.124 ===================================================== 00:09:33.124 Controller Capabilities/Features 00:09:33.124 ================================ 00:09:33.124 Vendor ID: 1b36 00:09:33.124 Subsystem Vendor ID: 1af4 00:09:33.124 Serial Number: 12342 00:09:33.124 Model Number: QEMU NVMe Ctrl 00:09:33.124 Firmware Version: 8.0.0 00:09:33.124 Recommended Arb Burst: 6 00:09:33.124 IEEE OUI Identifier: 00 54 52 00:09:33.124 Multi-path I/O 00:09:33.124 May have multiple subsystem ports: No 00:09:33.124 May have multiple controllers: No 00:09:33.124 Associated with SR-IOV VF: No 00:09:33.124 Max Data Transfer Size: 524288 00:09:33.124 Max Number of Namespaces: 256 00:09:33.124 Max Number of I/O Queues: 64 00:09:33.124 NVMe Specification Version (VS): 1.4 00:09:33.124 NVMe Specification Version (Identify): 1.4 00:09:33.124 Maximum Queue Entries: 2048 00:09:33.124 Contiguous Queues Required: Yes 00:09:33.124 Arbitration Mechanisms Supported 00:09:33.124 Weighted Round Robin: Not Supported 00:09:33.124 Vendor Specific: Not Supported 00:09:33.124 Reset Timeout: 7500 ms 00:09:33.124 Doorbell Stride: 4 bytes 00:09:33.124 NVM Subsystem Reset: Not Supported 00:09:33.124 Command Sets Supported 00:09:33.124 NVM Command Set: Supported 00:09:33.124 Boot Partition: Not Supported 00:09:33.124 Memory Page Size Minimum: 4096 bytes 00:09:33.124 Memory Page Size Maximum: 65536 bytes 00:09:33.124 Persistent Memory Region: Not Supported 00:09:33.124 Optional Asynchronous Events Supported 00:09:33.124 Namespace Attribute Notices: Supported 00:09:33.124 Firmware Activation Notices: Not Supported 00:09:33.124 ANA Change Notices: Not Supported 00:09:33.125 PLE Aggregate Log Change Notices: Not Supported 00:09:33.125 LBA Status Info Alert Notices: Not Supported 00:09:33.125 EGE Aggregate Log Change Notices: Not Supported 00:09:33.125 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.125 Zone Descriptor Change Notices: Not Supported 00:09:33.125 Discovery Log Change Notices: Not Supported 
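Between controller dumps, the xtrace lines (nvme/nvme.sh@15 and @16) show how the test drives spdk_nvme_identify: it loops over the PCIe addresses under test and points the tool at each one with a transport ID string. Reconstructed as a sketch, with only the loop header and the invocation taken verbatim from the log (how bdfs gets populated is an assumption):

bdfs=($(get_nvme_bdfs))   # hypothetical helper; the log shows only the already-populated array
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
done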
00:09:33.125 Controller Attributes 00:09:33.125 128-bit Host Identifier: Not Supported 00:09:33.125 Non-Operational Permissive Mode: Not Supported 00:09:33.125 NVM Sets: Not Supported 00:09:33.125 Read Recovery Levels: Not Supported 00:09:33.125 Endurance Groups: Not Supported 00:09:33.125 Predictable Latency Mode: Not Supported 00:09:33.125 Traffic Based Keep ALive: Not Supported 00:09:33.125 Namespace Granularity: Not Supported 00:09:33.125 SQ Associations: Not Supported 00:09:33.125 UUID List: Not Supported 00:09:33.125 Multi-Domain Subsystem: Not Supported 00:09:33.125 Fixed Capacity Management: Not Supported 00:09:33.125 Variable Capacity Management: Not Supported 00:09:33.125 Delete Endurance Group: Not Supported 00:09:33.125 Delete NVM Set: Not Supported 00:09:33.125 Extended LBA Formats Supported: Supported 00:09:33.125 Flexible Data Placement Supported: Not Supported 00:09:33.125 00:09:33.125 Controller Memory Buffer Support 00:09:33.125 ================================ 00:09:33.125 Supported: No 00:09:33.125 00:09:33.125 Persistent Memory Region Support 00:09:33.125 ================================ 00:09:33.125 Supported: No 00:09:33.125 00:09:33.125 Admin Command Set Attributes 00:09:33.125 ============================ 00:09:33.125 Security Send/Receive: Not Supported 00:09:33.125 Format NVM: Supported 00:09:33.125 Firmware Activate/Download: Not Supported 00:09:33.125 Namespace Management: Supported 00:09:33.125 Device Self-Test: Not Supported 00:09:33.125 Directives: Supported 00:09:33.125 NVMe-MI: Not Supported 00:09:33.125 Virtualization Management: Not Supported 00:09:33.125 Doorbell Buffer Config: Supported 00:09:33.125 Get LBA Status Capability: Not Supported 00:09:33.125 Command & Feature Lockdown Capability: Not Supported 00:09:33.125 Abort Command Limit: 4 00:09:33.125 Async Event Request Limit: 4 00:09:33.125 Number of Firmware Slots: N/A 00:09:33.125 Firmware Slot 1 Read-Only: N/A 00:09:33.125 Firmware Activation Without Reset: N/A 00:09:33.125 Multiple Update Detection Support: N/A 00:09:33.125 Firmware Update Granularity: No Information Provided 00:09:33.125 Per-Namespace SMART Log: Yes 00:09:33.125 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.125 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:33.125 Command Effects Log Page: Supported 00:09:33.125 Get Log Page Extended Data: Supported 00:09:33.125 Telemetry Log Pages: Not Supported 00:09:33.125 Persistent Event Log Pages: Not Supported 00:09:33.125 Supported Log Pages Log Page: May Support 00:09:33.125 Commands Supported & Effects Log Page: Not Supported 00:09:33.125 Feature Identifiers & Effects Log Page:May Support 00:09:33.125 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.125 Data Area 4 for Telemetry Log: Not Supported 00:09:33.125 Error Log Page Entries Supported: 1 00:09:33.125 Keep Alive: Not Supported 00:09:33.125 00:09:33.125 NVM Command Set Attributes 00:09:33.125 ========================== 00:09:33.125 Submission Queue Entry Size 00:09:33.125 Max: 64 00:09:33.125 Min: 64 00:09:33.125 Completion Queue Entry Size 00:09:33.125 Max: 16 00:09:33.125 Min: 16 00:09:33.125 Number of Namespaces: 256 00:09:33.125 Compare Command: Supported 00:09:33.125 Write Uncorrectable Command: Not Supported 00:09:33.125 Dataset Management Command: Supported 00:09:33.125 Write Zeroes Command: Supported 00:09:33.125 Set Features Save Field: Supported 00:09:33.125 Reservations: Not Supported 00:09:33.125 Timestamp: Supported 00:09:33.125 Copy: Supported 00:09:33.125 Volatile Write Cache: Present 
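The queue geometry reported here also fixes the memory cost of a maximally sized queue pair: 2048 entries at 64 bytes per submission queue entry and 16 bytes per completion queue entry:

echo $(( 2048 * 64 / 1024 ))   # -> 128 (KiB for a full-depth submission queue)
echo $(( 2048 * 16 / 1024 ))   # -> 32 (KiB for the matching completion queue)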
00:09:33.125 Atomic Write Unit (Normal): 1 00:09:33.125 Atomic Write Unit (PFail): 1 00:09:33.125 Atomic Compare & Write Unit: 1 00:09:33.125 Fused Compare & Write: Not Supported 00:09:33.125 Scatter-Gather List 00:09:33.125 SGL Command Set: Supported 00:09:33.125 SGL Keyed: Not Supported 00:09:33.125 SGL Bit Bucket Descriptor: Not Supported 00:09:33.125 SGL Metadata Pointer: Not Supported 00:09:33.125 Oversized SGL: Not Supported 00:09:33.125 SGL Metadata Address: Not Supported 00:09:33.125 SGL Offset: Not Supported 00:09:33.125 Transport SGL Data Block: Not Supported 00:09:33.125 Replay Protected Memory Block: Not Supported 00:09:33.125 00:09:33.125 Firmware Slot Information 00:09:33.125 ========================= 00:09:33.125 Active slot: 1 00:09:33.125 Slot 1 Firmware Revision: 1.0 00:09:33.125 00:09:33.125 00:09:33.125 Commands Supported and Effects 00:09:33.125 ============================== 00:09:33.125 Admin Commands 00:09:33.125 -------------- 00:09:33.125 Delete I/O Submission Queue (00h): Supported 00:09:33.125 Create I/O Submission Queue (01h): Supported 00:09:33.125 Get Log Page (02h): Supported 00:09:33.125 Delete I/O Completion Queue (04h): Supported 00:09:33.125 Create I/O Completion Queue (05h): Supported 00:09:33.125 Identify (06h): Supported 00:09:33.125 Abort (08h): Supported 00:09:33.125 Set Features (09h): Supported 00:09:33.125 Get Features (0Ah): Supported 00:09:33.125 Asynchronous Event Request (0Ch): Supported 00:09:33.125 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.125 Directive Send (19h): Supported 00:09:33.125 Directive Receive (1Ah): Supported 00:09:33.125 Virtualization Management (1Ch): Supported 00:09:33.125 Doorbell Buffer Config (7Ch): Supported 00:09:33.125 Format NVM (80h): Supported LBA-Change 00:09:33.125 I/O Commands 00:09:33.125 ------------ 00:09:33.125 Flush (00h): Supported LBA-Change 00:09:33.125 Write (01h): Supported LBA-Change 00:09:33.125 Read (02h): Supported 00:09:33.125 Compare (05h): Supported 00:09:33.125 Write Zeroes (08h): Supported LBA-Change 00:09:33.125 Dataset Management (09h): Supported LBA-Change 00:09:33.125 Unknown (0Ch): Supported 00:09:33.125 Unknown (12h): Supported 00:09:33.125 Copy (19h): Supported LBA-Change 00:09:33.125 Unknown (1Dh): Supported LBA-Change 00:09:33.125 00:09:33.125 Error Log 00:09:33.125 ========= 00:09:33.125 00:09:33.125 Arbitration 00:09:33.125 =========== 00:09:33.125 Arbitration Burst: no limit 00:09:33.125 00:09:33.125 Power Management 00:09:33.125 ================ 00:09:33.125 Number of Power States: 1 00:09:33.125 Current Power State: Power State #0 00:09:33.125 Power State #0: 00:09:33.125 Max Power: 25.00 W 00:09:33.125 Non-Operational State: Operational 00:09:33.125 Entry Latency: 16 microseconds 00:09:33.125 Exit Latency: 4 microseconds 00:09:33.125 Relative Read Throughput: 0 00:09:33.125 Relative Read Latency: 0 00:09:33.125 Relative Write Throughput: 0 00:09:33.125 Relative Write Latency: 0 00:09:33.125 Idle Power: Not Reported 00:09:33.125 Active Power: Not Reported 00:09:33.125 Non-Operational Permissive Mode: Not Supported 00:09:33.125 00:09:33.125 Health Information 00:09:33.125 ================== 00:09:33.125 Critical Warnings: 00:09:33.125 Available Spare Space: OK 00:09:33.125 Temperature: OK 00:09:33.125 Device Reliability: OK 00:09:33.125 Read Only: No 00:09:33.125 Volatile Memory Backup: OK 00:09:33.125 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.125 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.125 Available Spare: 0% 00:09:33.125 
Available Spare Threshold: 0% 00:09:33.125 Life Percentage Used: 0% 00:09:33.125 Data Units Read: 2177 00:09:33.125 Data Units Written: 1857 00:09:33.125 Host Read Commands: 100033 00:09:33.125 Host Write Commands: 95803 00:09:33.125 Controller Busy Time: 0 minutes 00:09:33.125 Power Cycles: 0 00:09:33.125 Power On Hours: 0 hours 00:09:33.125 Unsafe Shutdowns: 0 00:09:33.125 Unrecoverable Media Errors: 0 00:09:33.125 Lifetime Error Log Entries: 0 00:09:33.125 Warning Temperature Time: 0 minutes 00:09:33.125 Critical Temperature Time: 0 minutes 00:09:33.125 00:09:33.125 Number of Queues 00:09:33.125 ================ 00:09:33.125 Number of I/O Submission Queues: 64 00:09:33.125 Number of I/O Completion Queues: 64 00:09:33.125 00:09:33.125 ZNS Specific Controller Data 00:09:33.125 ============================ 00:09:33.125 Zone Append Size Limit: 0 00:09:33.125 00:09:33.125 00:09:33.125 Active Namespaces 00:09:33.125 ================= 00:09:33.125 Namespace ID:1 00:09:33.125 Error Recovery Timeout: Unlimited 00:09:33.125 Command Set Identifier: NVM (00h) 00:09:33.125 Deallocate: Supported 00:09:33.125 Deallocated/Unwritten Error: Supported 00:09:33.125 Deallocated Read Value: All 0x00 00:09:33.125 Deallocate in Write Zeroes: Not Supported 00:09:33.125 Deallocated Guard Field: 0xFFFF 00:09:33.126 Flush: Supported 00:09:33.126 Reservation: Not Supported 00:09:33.126 Namespace Sharing Capabilities: Private 00:09:33.126 Size (in LBAs): 1048576 (4GiB) 00:09:33.126 Capacity (in LBAs): 1048576 (4GiB) 00:09:33.126 Utilization (in LBAs): 1048576 (4GiB) 00:09:33.126 Thin Provisioning: Not Supported 00:09:33.126 Per-NS Atomic Units: No 00:09:33.126 Maximum Single Source Range Length: 128 00:09:33.126 Maximum Copy Length: 128 00:09:33.126 Maximum Source Range Count: 128 00:09:33.126 NGUID/EUI64 Never Reused: No 00:09:33.126 Namespace Write Protected: No 00:09:33.126 Number of LBA Formats: 8 00:09:33.126 Current LBA Format: LBA Format #04 00:09:33.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.126 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.126 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.126 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.126 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.126 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.126 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.126 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.126 00:09:33.126 NVM Specific Namespace Data 00:09:33.126 =========================== 00:09:33.126 Logical Block Storage Tag Mask: 0 00:09:33.126 Protection Information Capabilities: 00:09:33.126 16b Guard Protection Information Storage Tag Support: No 00:09:33.126 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:33.126 Storage Tag Check Read Support: No 00:09:33.126 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #06: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Namespace ID:2 00:09:33.126 Error Recovery Timeout: Unlimited 00:09:33.126 Command Set Identifier: NVM (00h) 00:09:33.126 Deallocate: Supported 00:09:33.126 Deallocated/Unwritten Error: Supported 00:09:33.126 Deallocated Read Value: All 0x00 00:09:33.126 Deallocate in Write Zeroes: Not Supported 00:09:33.126 Deallocated Guard Field: 0xFFFF 00:09:33.126 Flush: Supported 00:09:33.126 Reservation: Not Supported 00:09:33.126 Namespace Sharing Capabilities: Private 00:09:33.126 Size (in LBAs): 1048576 (4GiB) 00:09:33.126 Capacity (in LBAs): 1048576 (4GiB) 00:09:33.126 Utilization (in LBAs): 1048576 (4GiB) 00:09:33.126 Thin Provisioning: Not Supported 00:09:33.126 Per-NS Atomic Units: No 00:09:33.126 Maximum Single Source Range Length: 128 00:09:33.126 Maximum Copy Length: 128 00:09:33.126 Maximum Source Range Count: 128 00:09:33.126 NGUID/EUI64 Never Reused: No 00:09:33.126 Namespace Write Protected: No 00:09:33.126 Number of LBA Formats: 8 00:09:33.126 Current LBA Format: LBA Format #04 00:09:33.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.126 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.126 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.126 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.126 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.126 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.126 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.126 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.126 00:09:33.126 NVM Specific Namespace Data 00:09:33.126 =========================== 00:09:33.126 Logical Block Storage Tag Mask: 0 00:09:33.126 Protection Information Capabilities: 00:09:33.126 16b Guard Protection Information Storage Tag Support: No 00:09:33.126 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:33.126 Storage Tag Check Read Support: No 00:09:33.126 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Namespace ID:3 00:09:33.126 Error Recovery Timeout: Unlimited 00:09:33.126 Command Set Identifier: NVM (00h) 00:09:33.126 Deallocate: Supported 00:09:33.126 Deallocated/Unwritten Error: Supported 00:09:33.126 Deallocated Read Value: All 0x00 00:09:33.126 Deallocate in Write Zeroes: Not Supported 00:09:33.126 Deallocated Guard Field: 0xFFFF 00:09:33.126 Flush: Supported 00:09:33.126 Reservation: Not Supported 00:09:33.126 Namespace Sharing Capabilities: Private 00:09:33.126 Size (in LBAs): 1048576 (4GiB) 00:09:33.126 Capacity (in LBAs): 1048576 (4GiB) 00:09:33.126 Utilization (in LBAs): 1048576 (4GiB) 00:09:33.126 Thin Provisioning: Not Supported 
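The capacities in parentheses follow directly from the LBA counts and the 4096-byte data size of the current LBA format (#04):

echo $(( 1048576 * 4096 / 1024 / 1024 / 1024 ))   # -> 4, matching the (4GiB) namespaces here
echo $(( 1310720 * 4096 / 1024 / 1024 / 1024 ))   # -> 5, matching the 12341 namespace earlier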
00:09:33.126 Per-NS Atomic Units: No 00:09:33.126 Maximum Single Source Range Length: 128 00:09:33.126 Maximum Copy Length: 128 00:09:33.126 Maximum Source Range Count: 128 00:09:33.126 NGUID/EUI64 Never Reused: No 00:09:33.126 Namespace Write Protected: No 00:09:33.126 Number of LBA Formats: 8 00:09:33.126 Current LBA Format: LBA Format #04 00:09:33.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.126 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.126 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.126 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.126 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.126 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.126 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.126 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.126 00:09:33.126 NVM Specific Namespace Data 00:09:33.126 =========================== 00:09:33.126 Logical Block Storage Tag Mask: 0 00:09:33.126 Protection Information Capabilities: 00:09:33.126 16b Guard Protection Information Storage Tag Support: No 00:09:33.126 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:33.126 Storage Tag Check Read Support: No 00:09:33.126 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.126 15:17:46 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:33.126 15:17:46 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:33.386 ===================================================== 00:09:33.386 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:33.386 ===================================================== 00:09:33.386 Controller Capabilities/Features 00:09:33.386 ================================ 00:09:33.386 Vendor ID: 1b36 00:09:33.386 Subsystem Vendor ID: 1af4 00:09:33.386 Serial Number: 12343 00:09:33.386 Model Number: QEMU NVMe Ctrl 00:09:33.386 Firmware Version: 8.0.0 00:09:33.386 Recommended Arb Burst: 6 00:09:33.386 IEEE OUI Identifier: 00 54 52 00:09:33.386 Multi-path I/O 00:09:33.386 May have multiple subsystem ports: No 00:09:33.386 May have multiple controllers: Yes 00:09:33.386 Associated with SR-IOV VF: No 00:09:33.386 Max Data Transfer Size: 524288 00:09:33.386 Max Number of Namespaces: 256 00:09:33.386 Max Number of I/O Queues: 64 00:09:33.386 NVMe Specification Version (VS): 1.4 00:09:33.386 NVMe Specification Version (Identify): 1.4 00:09:33.386 Maximum Queue Entries: 2048 00:09:33.386 Contiguous Queues Required: Yes 00:09:33.386 Arbitration Mechanisms Supported 00:09:33.386 Weighted Round Robin: Not Supported 00:09:33.386 Vendor Specific: Not Supported 00:09:33.386 Reset Timeout: 7500 ms 00:09:33.386 
Doorbell Stride: 4 bytes 00:09:33.386 NVM Subsystem Reset: Not Supported 00:09:33.386 Command Sets Supported 00:09:33.386 NVM Command Set: Supported 00:09:33.386 Boot Partition: Not Supported 00:09:33.386 Memory Page Size Minimum: 4096 bytes 00:09:33.386 Memory Page Size Maximum: 65536 bytes 00:09:33.386 Persistent Memory Region: Not Supported 00:09:33.386 Optional Asynchronous Events Supported 00:09:33.386 Namespace Attribute Notices: Supported 00:09:33.386 Firmware Activation Notices: Not Supported 00:09:33.386 ANA Change Notices: Not Supported 00:09:33.386 PLE Aggregate Log Change Notices: Not Supported 00:09:33.386 LBA Status Info Alert Notices: Not Supported 00:09:33.386 EGE Aggregate Log Change Notices: Not Supported 00:09:33.386 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.386 Zone Descriptor Change Notices: Not Supported 00:09:33.386 Discovery Log Change Notices: Not Supported 00:09:33.386 Controller Attributes 00:09:33.386 128-bit Host Identifier: Not Supported 00:09:33.386 Non-Operational Permissive Mode: Not Supported 00:09:33.386 NVM Sets: Not Supported 00:09:33.386 Read Recovery Levels: Not Supported 00:09:33.386 Endurance Groups: Supported 00:09:33.386 Predictable Latency Mode: Not Supported 00:09:33.386 Traffic Based Keep ALive: Not Supported 00:09:33.386 Namespace Granularity: Not Supported 00:09:33.386 SQ Associations: Not Supported 00:09:33.386 UUID List: Not Supported 00:09:33.386 Multi-Domain Subsystem: Not Supported 00:09:33.386 Fixed Capacity Management: Not Supported 00:09:33.386 Variable Capacity Management: Not Supported 00:09:33.386 Delete Endurance Group: Not Supported 00:09:33.386 Delete NVM Set: Not Supported 00:09:33.386 Extended LBA Formats Supported: Supported 00:09:33.386 Flexible Data Placement Supported: Supported 00:09:33.386 00:09:33.386 Controller Memory Buffer Support 00:09:33.386 ================================ 00:09:33.386 Supported: No 00:09:33.386 00:09:33.386 Persistent Memory Region Support 00:09:33.386 ================================ 00:09:33.386 Supported: No 00:09:33.386 00:09:33.386 Admin Command Set Attributes 00:09:33.386 ============================ 00:09:33.386 Security Send/Receive: Not Supported 00:09:33.386 Format NVM: Supported 00:09:33.386 Firmware Activate/Download: Not Supported 00:09:33.386 Namespace Management: Supported 00:09:33.386 Device Self-Test: Not Supported 00:09:33.386 Directives: Supported 00:09:33.386 NVMe-MI: Not Supported 00:09:33.386 Virtualization Management: Not Supported 00:09:33.386 Doorbell Buffer Config: Supported 00:09:33.386 Get LBA Status Capability: Not Supported 00:09:33.386 Command & Feature Lockdown Capability: Not Supported 00:09:33.386 Abort Command Limit: 4 00:09:33.386 Async Event Request Limit: 4 00:09:33.386 Number of Firmware Slots: N/A 00:09:33.386 Firmware Slot 1 Read-Only: N/A 00:09:33.386 Firmware Activation Without Reset: N/A 00:09:33.386 Multiple Update Detection Support: N/A 00:09:33.386 Firmware Update Granularity: No Information Provided 00:09:33.386 Per-Namespace SMART Log: Yes 00:09:33.386 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.386 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:33.386 Command Effects Log Page: Supported 00:09:33.386 Get Log Page Extended Data: Supported 00:09:33.386 Telemetry Log Pages: Not Supported 00:09:33.386 Persistent Event Log Pages: Not Supported 00:09:33.386 Supported Log Pages Log Page: May Support 00:09:33.386 Commands Supported & Effects Log Page: Not Supported 00:09:33.386 Feature Identifiers & Effects Log 
Page:May Support 00:09:33.386 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.386 Data Area 4 for Telemetry Log: Not Supported 00:09:33.386 Error Log Page Entries Supported: 1 00:09:33.386 Keep Alive: Not Supported 00:09:33.386 00:09:33.387 NVM Command Set Attributes 00:09:33.387 ========================== 00:09:33.387 Submission Queue Entry Size 00:09:33.387 Max: 64 00:09:33.387 Min: 64 00:09:33.387 Completion Queue Entry Size 00:09:33.387 Max: 16 00:09:33.387 Min: 16 00:09:33.387 Number of Namespaces: 256 00:09:33.387 Compare Command: Supported 00:09:33.387 Write Uncorrectable Command: Not Supported 00:09:33.387 Dataset Management Command: Supported 00:09:33.387 Write Zeroes Command: Supported 00:09:33.387 Set Features Save Field: Supported 00:09:33.387 Reservations: Not Supported 00:09:33.387 Timestamp: Supported 00:09:33.387 Copy: Supported 00:09:33.387 Volatile Write Cache: Present 00:09:33.387 Atomic Write Unit (Normal): 1 00:09:33.387 Atomic Write Unit (PFail): 1 00:09:33.387 Atomic Compare & Write Unit: 1 00:09:33.387 Fused Compare & Write: Not Supported 00:09:33.387 Scatter-Gather List 00:09:33.387 SGL Command Set: Supported 00:09:33.387 SGL Keyed: Not Supported 00:09:33.387 SGL Bit Bucket Descriptor: Not Supported 00:09:33.387 SGL Metadata Pointer: Not Supported 00:09:33.387 Oversized SGL: Not Supported 00:09:33.387 SGL Metadata Address: Not Supported 00:09:33.387 SGL Offset: Not Supported 00:09:33.387 Transport SGL Data Block: Not Supported 00:09:33.387 Replay Protected Memory Block: Not Supported 00:09:33.387 00:09:33.387 Firmware Slot Information 00:09:33.387 ========================= 00:09:33.387 Active slot: 1 00:09:33.387 Slot 1 Firmware Revision: 1.0 00:09:33.387 00:09:33.387 00:09:33.387 Commands Supported and Effects 00:09:33.387 ============================== 00:09:33.387 Admin Commands 00:09:33.387 -------------- 00:09:33.387 Delete I/O Submission Queue (00h): Supported 00:09:33.387 Create I/O Submission Queue (01h): Supported 00:09:33.387 Get Log Page (02h): Supported 00:09:33.387 Delete I/O Completion Queue (04h): Supported 00:09:33.387 Create I/O Completion Queue (05h): Supported 00:09:33.387 Identify (06h): Supported 00:09:33.387 Abort (08h): Supported 00:09:33.387 Set Features (09h): Supported 00:09:33.387 Get Features (0Ah): Supported 00:09:33.387 Asynchronous Event Request (0Ch): Supported 00:09:33.387 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:33.387 Directive Send (19h): Supported 00:09:33.387 Directive Receive (1Ah): Supported 00:09:33.387 Virtualization Management (1Ch): Supported 00:09:33.387 Doorbell Buffer Config (7Ch): Supported 00:09:33.387 Format NVM (80h): Supported LBA-Change 00:09:33.387 I/O Commands 00:09:33.387 ------------ 00:09:33.387 Flush (00h): Supported LBA-Change 00:09:33.387 Write (01h): Supported LBA-Change 00:09:33.387 Read (02h): Supported 00:09:33.387 Compare (05h): Supported 00:09:33.387 Write Zeroes (08h): Supported LBA-Change 00:09:33.387 Dataset Management (09h): Supported LBA-Change 00:09:33.387 Unknown (0Ch): Supported 00:09:33.387 Unknown (12h): Supported 00:09:33.387 Copy (19h): Supported LBA-Change 00:09:33.387 Unknown (1Dh): Supported LBA-Change 00:09:33.387 00:09:33.387 Error Log 00:09:33.387 ========= 00:09:33.387 00:09:33.387 Arbitration 00:09:33.387 =========== 00:09:33.387 Arbitration Burst: no limit 00:09:33.387 00:09:33.387 Power Management 00:09:33.387 ================ 00:09:33.387 Number of Power States: 1 00:09:33.387 Current Power State: Power State #0 00:09:33.387 Power State #0: 
00:09:33.387 Max Power: 25.00 W 00:09:33.387 Non-Operational State: Operational 00:09:33.387 Entry Latency: 16 microseconds 00:09:33.387 Exit Latency: 4 microseconds 00:09:33.387 Relative Read Throughput: 0 00:09:33.387 Relative Read Latency: 0 00:09:33.387 Relative Write Throughput: 0 00:09:33.387 Relative Write Latency: 0 00:09:33.387 Idle Power: Not Reported 00:09:33.387 Active Power: Not Reported 00:09:33.387 Non-Operational Permissive Mode: Not Supported 00:09:33.387 00:09:33.387 Health Information 00:09:33.387 ================== 00:09:33.387 Critical Warnings: 00:09:33.387 Available Spare Space: OK 00:09:33.387 Temperature: OK 00:09:33.387 Device Reliability: OK 00:09:33.387 Read Only: No 00:09:33.387 Volatile Memory Backup: OK 00:09:33.387 Current Temperature: 323 Kelvin (50 Celsius) 00:09:33.387 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:33.387 Available Spare: 0% 00:09:33.387 Available Spare Threshold: 0% 00:09:33.387 Life Percentage Used: 0% 00:09:33.387 Data Units Read: 798 00:09:33.387 Data Units Written: 692 00:09:33.387 Host Read Commands: 33987 00:09:33.387 Host Write Commands: 32577 00:09:33.387 Controller Busy Time: 0 minutes 00:09:33.387 Power Cycles: 0 00:09:33.387 Power On Hours: 0 hours 00:09:33.387 Unsafe Shutdowns: 0 00:09:33.387 Unrecoverable Media Errors: 0 00:09:33.387 Lifetime Error Log Entries: 0 00:09:33.387 Warning Temperature Time: 0 minutes 00:09:33.387 Critical Temperature Time: 0 minutes 00:09:33.387 00:09:33.387 Number of Queues 00:09:33.387 ================ 00:09:33.387 Number of I/O Submission Queues: 64 00:09:33.387 Number of I/O Completion Queues: 64 00:09:33.387 00:09:33.387 ZNS Specific Controller Data 00:09:33.387 ============================ 00:09:33.387 Zone Append Size Limit: 0 00:09:33.387 00:09:33.387 00:09:33.387 Active Namespaces 00:09:33.387 ================= 00:09:33.387 Namespace ID:1 00:09:33.387 Error Recovery Timeout: Unlimited 00:09:33.387 Command Set Identifier: NVM (00h) 00:09:33.387 Deallocate: Supported 00:09:33.387 Deallocated/Unwritten Error: Supported 00:09:33.387 Deallocated Read Value: All 0x00 00:09:33.387 Deallocate in Write Zeroes: Not Supported 00:09:33.387 Deallocated Guard Field: 0xFFFF 00:09:33.387 Flush: Supported 00:09:33.387 Reservation: Not Supported 00:09:33.387 Namespace Sharing Capabilities: Multiple Controllers 00:09:33.387 Size (in LBAs): 262144 (1GiB) 00:09:33.387 Capacity (in LBAs): 262144 (1GiB) 00:09:33.387 Utilization (in LBAs): 262144 (1GiB) 00:09:33.387 Thin Provisioning: Not Supported 00:09:33.387 Per-NS Atomic Units: No 00:09:33.387 Maximum Single Source Range Length: 128 00:09:33.387 Maximum Copy Length: 128 00:09:33.387 Maximum Source Range Count: 128 00:09:33.387 NGUID/EUI64 Never Reused: No 00:09:33.387 Namespace Write Protected: No 00:09:33.387 Endurance group ID: 1 00:09:33.387 Number of LBA Formats: 8 00:09:33.387 Current LBA Format: LBA Format #04 00:09:33.387 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.387 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:33.387 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:33.387 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:33.387 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:33.387 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:33.387 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:33.387 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:33.387 00:09:33.387 Get Feature FDP: 00:09:33.387 ================ 00:09:33.387 Enabled: Yes 00:09:33.387 FDP configuration index: 0 00:09:33.387 
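As in the other health blocks, temperatures are reported in Kelvin, the unit the NVMe spec uses for the composite temperature field; the Celsius values in parentheses are derived with a plain integer offset of 273 (not 273.15):

echo $(( 323 - 273 ))   # -> 50, the printed current temperature in Celsius
echo $(( 343 - 273 ))   # -> 70, the printed warning threshold in Celsius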
00:09:33.387 FDP configurations log page 00:09:33.387 =========================== 00:09:33.387 Number of FDP configurations: 1 00:09:33.387 Version: 0 00:09:33.387 Size: 112 00:09:33.387 FDP Configuration Descriptor: 0 00:09:33.387 Descriptor Size: 96 00:09:33.387 Reclaim Group Identifier format: 2 00:09:33.387 FDP Volatile Write Cache: Not Present 00:09:33.387 FDP Configuration: Valid 00:09:33.387 Vendor Specific Size: 0 00:09:33.387 Number of Reclaim Groups: 2 00:09:33.387 Number of Reclaim Unit Handles: 8 00:09:33.387 Max Placement Identifiers: 128 00:09:33.387 Number of Namespaces Supported: 256 00:09:33.387 Reclaim Unit Nominal Size: 6000000 bytes 00:09:33.387 Estimated Reclaim Unit Time Limit: Not Reported 00:09:33.387 RUH Desc #000: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #001: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #002: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #003: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #004: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #005: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #006: RUH Type: Initially Isolated 00:09:33.387 RUH Desc #007: RUH Type: Initially Isolated 00:09:33.387 00:09:33.387 FDP reclaim unit handle usage log page 00:09:33.387 ====================================== 00:09:33.387 Number of Reclaim Unit Handles: 8 00:09:33.387 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:33.387 RUH Usage Desc #001: RUH Attributes: Unused 00:09:33.387 RUH Usage Desc #002: RUH Attributes: Unused 00:09:33.387 RUH Usage Desc #003: RUH Attributes: Unused 00:09:33.387 RUH Usage Desc #004: RUH Attributes: Unused 00:09:33.388 RUH Usage Desc #005: RUH Attributes: Unused 00:09:33.388 RUH Usage Desc #006: RUH Attributes: Unused 00:09:33.388 RUH Usage Desc #007: RUH Attributes: Unused 00:09:33.388 00:09:33.388 FDP statistics log page 00:09:33.388 ======================= 00:09:33.388 Host bytes with metadata written: 439787520 00:09:33.388 Media bytes with metadata written: 439853056 00:09:33.388 Media bytes erased: 0 00:09:33.388 00:09:33.388 FDP events log page 00:09:33.388 =================== 00:09:33.388 Number of FDP events: 0 00:09:33.388 00:09:33.388 NVM Specific Namespace Data 00:09:33.388 =========================== 00:09:33.388 Logical Block Storage Tag Mask: 0 00:09:33.388 Protection Information Capabilities: 00:09:33.388 16b Guard Protection Information Storage Tag Support: No 00:09:33.388 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:33.388 Storage Tag Check Read Support: No 00:09:33.388 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:33.388 ************************************ 00:09:33.388 END TEST nvme_identify 00:09:33.388 ************************************ 
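One detail worth pulling out of the FDP statistics log page above: media bytes written exceed host bytes written by exactly 64 KiB, i.e. the controller has so far written one extra 64 KiB unit beyond what the host submitted:

echo $(( 439853056 - 439787520 ))   # -> 65536 bytes (64 KiB) of controller-side write overhead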
00:09:33.388 real 0m1.620s 00:09:33.388 user 0m0.674s 00:09:33.388 sys 0m0.742s 00:09:33.388 15:17:46 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.388 15:17:46 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:33.388 15:17:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:33.388 15:17:46 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:33.388 15:17:46 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.388 15:17:46 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.388 15:17:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.388 ************************************ 00:09:33.388 START TEST nvme_perf 00:09:33.388 ************************************ 00:09:33.388 15:17:46 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:09:33.388 15:17:46 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:34.767 Initializing NVMe Controllers 00:09:34.767 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:34.767 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:34.767 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:34.767 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:34.767 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:34.767 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:34.767 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:34.767 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:34.767 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:34.767 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:34.767 Initialization complete. Launching workers. 
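Before the results that follow, one consistency check that is easy to apply to spdk_nvme_perf output: the workload issues 12288-byte reads (-o 12288), so the MiB/s column should equal IOPS times the I/O size. For the Total row in the table below:

awk 'BEGIN { printf "%.2f MiB/s\n", 73337.18 * 12288 / 1048576 }'   # -> 859.42 MiB/s, matching the Total row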
00:09:34.767 ======================================================== 00:09:34.767 Latency(us) 00:09:34.767 Device Information : IOPS MiB/s Average min max 00:09:34.767 PCIE (0000:00:11.0) NSID 1 from core 0: 12222.86 143.24 10483.54 8220.72 42904.32 00:09:34.767 PCIE (0000:00:13.0) NSID 1 from core 0: 12222.86 143.24 10458.19 8178.06 40499.64 00:09:34.767 PCIE (0000:00:10.0) NSID 1 from core 0: 12222.86 143.24 10429.22 8062.40 38242.45 00:09:34.767 PCIE (0000:00:12.0) NSID 1 from core 0: 12222.86 143.24 10401.84 8181.17 35507.92 00:09:34.767 PCIE (0000:00:12.0) NSID 2 from core 0: 12222.86 143.24 10373.39 8146.27 33328.54 00:09:34.767 PCIE (0000:00:12.0) NSID 3 from core 0: 12222.86 143.24 10345.30 8171.91 30529.95 00:09:34.767 ======================================================== 00:09:34.767 Total : 73337.18 859.42 10415.25 8062.40 42904.32 00:09:34.767 00:09:34.767 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:34.767 ================================================================================= 00:09:34.767 1.00000% : 8460.102us 00:09:34.767 10.00000% : 8996.305us 00:09:34.767 25.00000% : 9592.087us 00:09:34.767 50.00000% : 10128.291us 00:09:34.767 75.00000% : 10604.916us 00:09:34.767 90.00000% : 11856.058us 00:09:34.767 95.00000% : 12511.418us 00:09:34.767 98.00000% : 14358.342us 00:09:34.767 99.00000% : 32887.156us 00:09:34.767 99.50000% : 40751.476us 00:09:34.767 99.90000% : 42657.978us 00:09:34.767 99.99000% : 42896.291us 00:09:34.767 99.99900% : 43134.604us 00:09:34.767 99.99990% : 43134.604us 00:09:34.767 99.99999% : 43134.604us 00:09:34.767 00:09:34.767 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:34.767 ================================================================================= 00:09:34.767 1.00000% : 8460.102us 00:09:34.767 10.00000% : 8996.305us 00:09:34.767 25.00000% : 9592.087us 00:09:34.767 50.00000% : 10128.291us 00:09:34.767 75.00000% : 10604.916us 00:09:34.767 90.00000% : 11796.480us 00:09:34.767 95.00000% : 12511.418us 00:09:34.767 98.00000% : 14239.185us 00:09:34.767 99.00000% : 30980.655us 00:09:34.767 99.50000% : 38368.349us 00:09:34.767 99.90000% : 40274.851us 00:09:34.767 99.99000% : 40513.164us 00:09:34.767 99.99900% : 40513.164us 00:09:34.767 99.99990% : 40513.164us 00:09:34.767 99.99999% : 40513.164us 00:09:34.767 00:09:34.767 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:34.767 ================================================================================= 00:09:34.767 1.00000% : 8400.524us 00:09:34.767 10.00000% : 8996.305us 00:09:34.767 25.00000% : 9592.087us 00:09:34.767 50.00000% : 10128.291us 00:09:34.767 75.00000% : 10604.916us 00:09:34.767 90.00000% : 11736.902us 00:09:34.767 95.00000% : 12570.996us 00:09:34.767 98.00000% : 14239.185us 00:09:34.767 99.00000% : 28240.058us 00:09:34.767 99.50000% : 35746.909us 00:09:34.767 99.90000% : 37891.724us 00:09:34.767 99.99000% : 38368.349us 00:09:34.767 99.99900% : 38368.349us 00:09:34.767 99.99990% : 38368.349us 00:09:34.767 99.99999% : 38368.349us 00:09:34.767 00:09:34.767 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:34.767 ================================================================================= 00:09:34.767 1.00000% : 8460.102us 00:09:34.767 10.00000% : 8996.305us 00:09:34.767 25.00000% : 9592.087us 00:09:34.767 50.00000% : 10128.291us 00:09:34.767 75.00000% : 10545.338us 00:09:34.767 90.00000% : 11856.058us 00:09:34.767 95.00000% : 12451.840us 00:09:34.767 98.00000% : 14239.185us 
00:09:34.767 99.00000% : 25976.087us 00:09:34.767 99.50000% : 33363.782us 00:09:34.767 99.90000% : 35270.284us 00:09:34.767 99.99000% : 35508.596us 00:09:34.767 99.99900% : 35508.596us 00:09:34.767 99.99990% : 35508.596us 00:09:34.767 99.99999% : 35508.596us 00:09:34.767 00:09:34.767 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:34.767 ================================================================================= 00:09:34.767 1.00000% : 8460.102us 00:09:34.767 10.00000% : 8936.727us 00:09:34.767 25.00000% : 9592.087us 00:09:34.767 50.00000% : 10128.291us 00:09:34.767 75.00000% : 10604.916us 00:09:34.767 90.00000% : 11796.480us 00:09:34.767 95.00000% : 12511.418us 00:09:34.767 98.00000% : 14656.233us 00:09:34.767 99.00000% : 23712.116us 00:09:34.767 99.50000% : 31218.967us 00:09:34.767 99.90000% : 33125.469us 00:09:34.767 99.99000% : 33363.782us 00:09:34.767 99.99900% : 33363.782us 00:09:34.767 99.99990% : 33363.782us 00:09:34.767 99.99999% : 33363.782us 00:09:34.767 00:09:34.767 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:34.768 ================================================================================= 00:09:34.768 1.00000% : 8460.102us 00:09:34.768 10.00000% : 8996.305us 00:09:34.768 25.00000% : 9592.087us 00:09:34.768 50.00000% : 10128.291us 00:09:34.768 75.00000% : 10545.338us 00:09:34.768 90.00000% : 11796.480us 00:09:34.768 95.00000% : 12451.840us 00:09:34.768 98.00000% : 14715.811us 00:09:34.768 99.00000% : 21209.833us 00:09:34.768 99.50000% : 28359.215us 00:09:34.768 99.90000% : 30146.560us 00:09:34.768 99.99000% : 30504.029us 00:09:34.768 99.99900% : 30742.342us 00:09:34.768 99.99990% : 30742.342us 00:09:34.768 99.99999% : 30742.342us 00:09:34.768 00:09:34.768 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:34.768 ============================================================================== 00:09:34.768 Range in us Cumulative IO count 00:09:34.768 8162.211 - 8221.789: 0.0082% ( 1) 00:09:34.768 8221.789 - 8281.367: 0.1554% ( 18) 00:09:34.768 8281.367 - 8340.945: 0.3436% ( 23) 00:09:34.768 8340.945 - 8400.524: 0.6381% ( 36) 00:09:34.768 8400.524 - 8460.102: 1.1453% ( 62) 00:09:34.768 8460.102 - 8519.680: 1.9224% ( 95) 00:09:34.768 8519.680 - 8579.258: 2.8632% ( 115) 00:09:34.768 8579.258 - 8638.836: 3.8204% ( 117) 00:09:34.768 8638.836 - 8698.415: 4.8920% ( 131) 00:09:34.768 8698.415 - 8757.993: 6.0128% ( 137) 00:09:34.768 8757.993 - 8817.571: 7.2480% ( 151) 00:09:34.768 8817.571 - 8877.149: 8.4915% ( 152) 00:09:34.768 8877.149 - 8936.727: 9.7677% ( 156) 00:09:34.768 8936.727 - 8996.305: 10.9866% ( 149) 00:09:34.768 8996.305 - 9055.884: 12.3200% ( 163) 00:09:34.768 9055.884 - 9115.462: 13.5798% ( 154) 00:09:34.768 9115.462 - 9175.040: 14.8724% ( 158) 00:09:34.768 9175.040 - 9234.618: 16.0586% ( 145) 00:09:34.768 9234.618 - 9294.196: 17.3511% ( 158) 00:09:34.768 9294.196 - 9353.775: 18.6600% ( 160) 00:09:34.768 9353.775 - 9413.353: 20.0016% ( 164) 00:09:34.768 9413.353 - 9472.931: 21.6132% ( 197) 00:09:34.768 9472.931 - 9532.509: 23.4375% ( 223) 00:09:34.768 9532.509 - 9592.087: 25.3518% ( 234) 00:09:34.768 9592.087 - 9651.665: 27.3560% ( 245) 00:09:34.768 9651.665 - 9711.244: 29.5975% ( 274) 00:09:34.768 9711.244 - 9770.822: 32.1171% ( 308) 00:09:34.768 9770.822 - 9830.400: 34.9231% ( 343) 00:09:34.768 9830.400 - 9889.978: 38.0645% ( 384) 00:09:34.768 9889.978 - 9949.556: 41.3858% ( 406) 00:09:34.768 9949.556 - 10009.135: 44.8544% ( 424) 00:09:34.768 10009.135 - 10068.713: 48.3312% ( 425) 00:09:34.768 
10068.713 - 10128.291: 51.7997% ( 424) 00:09:34.768 10128.291 - 10187.869: 55.4647% ( 448) 00:09:34.768 10187.869 - 10247.447: 59.0069% ( 433) 00:09:34.768 10247.447 - 10307.025: 62.5245% ( 430) 00:09:34.768 10307.025 - 10366.604: 65.8213% ( 403) 00:09:34.768 10366.604 - 10426.182: 68.7827% ( 362) 00:09:34.768 10426.182 - 10485.760: 71.5151% ( 334) 00:09:34.768 10485.760 - 10545.338: 74.2392% ( 333) 00:09:34.768 10545.338 - 10604.916: 76.7425% ( 306) 00:09:34.768 10604.916 - 10664.495: 78.9103% ( 265) 00:09:34.768 10664.495 - 10724.073: 80.6528% ( 213) 00:09:34.768 10724.073 - 10783.651: 82.1008% ( 177) 00:09:34.768 10783.651 - 10843.229: 83.2461% ( 140) 00:09:34.768 10843.229 - 10902.807: 84.1787% ( 114) 00:09:34.768 10902.807 - 10962.385: 84.9722% ( 97) 00:09:34.768 10962.385 - 11021.964: 85.5776% ( 74) 00:09:34.768 11021.964 - 11081.542: 86.1175% ( 66) 00:09:34.768 11081.542 - 11141.120: 86.5674% ( 55) 00:09:34.768 11141.120 - 11200.698: 87.0255% ( 56) 00:09:34.768 11200.698 - 11260.276: 87.4346% ( 50) 00:09:34.768 11260.276 - 11319.855: 87.7372% ( 37) 00:09:34.768 11319.855 - 11379.433: 88.0399% ( 37) 00:09:34.768 11379.433 - 11439.011: 88.2772% ( 29) 00:09:34.768 11439.011 - 11498.589: 88.4735% ( 24) 00:09:34.768 11498.589 - 11558.167: 88.6780% ( 25) 00:09:34.768 11558.167 - 11617.745: 88.9398% ( 32) 00:09:34.768 11617.745 - 11677.324: 89.1934% ( 31) 00:09:34.768 11677.324 - 11736.902: 89.5288% ( 41) 00:09:34.768 11736.902 - 11796.480: 89.8724% ( 42) 00:09:34.768 11796.480 - 11856.058: 90.2323% ( 44) 00:09:34.768 11856.058 - 11915.636: 90.6495% ( 51) 00:09:34.768 11915.636 - 11975.215: 91.0913% ( 54) 00:09:34.768 11975.215 - 12034.793: 91.5494% ( 56) 00:09:34.768 12034.793 - 12094.371: 92.0157% ( 57) 00:09:34.768 12094.371 - 12153.949: 92.4738% ( 56) 00:09:34.768 12153.949 - 12213.527: 92.9810% ( 62) 00:09:34.768 12213.527 - 12273.105: 93.4719% ( 60) 00:09:34.768 12273.105 - 12332.684: 93.9627% ( 60) 00:09:34.768 12332.684 - 12392.262: 94.4535% ( 60) 00:09:34.768 12392.262 - 12451.840: 94.8953% ( 54) 00:09:34.768 12451.840 - 12511.418: 95.3043% ( 50) 00:09:34.768 12511.418 - 12570.996: 95.6152% ( 38) 00:09:34.768 12570.996 - 12630.575: 95.9015% ( 35) 00:09:34.768 12630.575 - 12690.153: 96.1306% ( 28) 00:09:34.768 12690.153 - 12749.731: 96.3678% ( 29) 00:09:34.768 12749.731 - 12809.309: 96.5396% ( 21) 00:09:34.768 12809.309 - 12868.887: 96.6705% ( 16) 00:09:34.768 12868.887 - 12928.465: 96.7932% ( 15) 00:09:34.768 12928.465 - 12988.044: 96.8832% ( 11) 00:09:34.768 12988.044 - 13047.622: 96.9650% ( 10) 00:09:34.768 13047.622 - 13107.200: 97.0304% ( 8) 00:09:34.768 13107.200 - 13166.778: 97.0959% ( 8) 00:09:34.768 13166.778 - 13226.356: 97.1368% ( 5) 00:09:34.768 13226.356 - 13285.935: 97.1777% ( 5) 00:09:34.768 13285.935 - 13345.513: 97.2104% ( 4) 00:09:34.768 13345.513 - 13405.091: 97.2431% ( 4) 00:09:34.768 13405.091 - 13464.669: 97.3086% ( 8) 00:09:34.768 13464.669 - 13524.247: 97.3740% ( 8) 00:09:34.768 13524.247 - 13583.825: 97.4476% ( 9) 00:09:34.768 13583.825 - 13643.404: 97.4967% ( 6) 00:09:34.768 13643.404 - 13702.982: 97.5540% ( 7) 00:09:34.768 13702.982 - 13762.560: 97.5867% ( 4) 00:09:34.768 13762.560 - 13822.138: 97.6113% ( 3) 00:09:34.768 13822.138 - 13881.716: 97.6522% ( 5) 00:09:34.768 13881.716 - 13941.295: 97.6849% ( 4) 00:09:34.768 13941.295 - 14000.873: 97.7176% ( 4) 00:09:34.768 14000.873 - 14060.451: 97.7503% ( 4) 00:09:34.768 14060.451 - 14120.029: 97.7994% ( 6) 00:09:34.768 14120.029 - 14179.607: 97.8649% ( 8) 00:09:34.768 14179.607 - 14239.185: 97.9303% ( 8) 
00:09:34.768 14239.185 - 14298.764: 97.9876% ( 7) 00:09:34.768 14298.764 - 14358.342: 98.0448% ( 7) 00:09:34.768 14358.342 - 14417.920: 98.0776% ( 4) 00:09:34.768 14417.920 - 14477.498: 98.1103% ( 4) 00:09:34.768 14477.498 - 14537.076: 98.1430% ( 4) 00:09:34.768 14537.076 - 14596.655: 98.1757% ( 4) 00:09:34.768 14596.655 - 14656.233: 98.2166% ( 5) 00:09:34.768 14656.233 - 14715.811: 98.2493% ( 4) 00:09:34.768 14715.811 - 14775.389: 98.2739% ( 3) 00:09:34.768 14775.389 - 14834.967: 98.3066% ( 4) 00:09:34.768 14834.967 - 14894.545: 98.3475% ( 5) 00:09:34.768 14894.545 - 14954.124: 98.3802% ( 4) 00:09:34.768 14954.124 - 15013.702: 98.3966% ( 2) 00:09:34.768 15013.702 - 15073.280: 98.4211% ( 3) 00:09:34.768 15073.280 - 15132.858: 98.4539% ( 4) 00:09:34.768 15132.858 - 15192.436: 98.4784% ( 3) 00:09:34.768 15192.436 - 15252.015: 98.5029% ( 3) 00:09:34.768 15252.015 - 15371.171: 98.5766% ( 9) 00:09:34.768 15371.171 - 15490.327: 98.6338% ( 7) 00:09:34.768 15490.327 - 15609.484: 98.6993% ( 8) 00:09:34.768 15609.484 - 15728.640: 98.7484% ( 6) 00:09:34.768 15728.640 - 15847.796: 98.8138% ( 8) 00:09:34.768 15847.796 - 15966.953: 98.8874% ( 9) 00:09:34.768 15966.953 - 16086.109: 98.9529% ( 8) 00:09:34.768 32410.531 - 32648.844: 98.9856% ( 4) 00:09:34.768 32648.844 - 32887.156: 99.0347% ( 6) 00:09:34.768 32887.156 - 33125.469: 99.0838% ( 6) 00:09:34.768 33125.469 - 33363.782: 99.1329% ( 6) 00:09:34.768 33363.782 - 33602.095: 99.1819% ( 6) 00:09:34.768 33602.095 - 33840.407: 99.2392% ( 7) 00:09:34.768 33840.407 - 34078.720: 99.2883% ( 6) 00:09:34.768 34078.720 - 34317.033: 99.3374% ( 6) 00:09:34.768 34317.033 - 34555.345: 99.3946% ( 7) 00:09:34.768 34555.345 - 34793.658: 99.4437% ( 6) 00:09:34.768 34793.658 - 35031.971: 99.4764% ( 4) 00:09:34.768 40274.851 - 40513.164: 99.4928% ( 2) 00:09:34.768 40513.164 - 40751.476: 99.5419% ( 6) 00:09:34.768 40751.476 - 40989.789: 99.5910% ( 6) 00:09:34.768 40989.789 - 41228.102: 99.6401% ( 6) 00:09:34.768 41228.102 - 41466.415: 99.6891% ( 6) 00:09:34.768 41466.415 - 41704.727: 99.7382% ( 6) 00:09:34.768 41704.727 - 41943.040: 99.7873% ( 6) 00:09:34.768 41943.040 - 42181.353: 99.8446% ( 7) 00:09:34.768 42181.353 - 42419.665: 99.8937% ( 6) 00:09:34.768 42419.665 - 42657.978: 99.9427% ( 6) 00:09:34.768 42657.978 - 42896.291: 99.9918% ( 6) 00:09:34.768 42896.291 - 43134.604: 100.0000% ( 1) 00:09:34.768 00:09:34.768 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:34.768 ============================================================================== 00:09:34.768 Range in us Cumulative IO count 00:09:34.768 8162.211 - 8221.789: 0.0491% ( 6) 00:09:34.768 8221.789 - 8281.367: 0.1227% ( 9) 00:09:34.768 8281.367 - 8340.945: 0.3518% ( 28) 00:09:34.768 8340.945 - 8400.524: 0.8017% ( 55) 00:09:34.768 8400.524 - 8460.102: 1.3662% ( 69) 00:09:34.768 8460.102 - 8519.680: 2.0861% ( 88) 00:09:34.768 8519.680 - 8579.258: 2.9614% ( 107) 00:09:34.768 8579.258 - 8638.836: 3.9594% ( 122) 00:09:34.768 8638.836 - 8698.415: 4.9493% ( 121) 00:09:34.768 8698.415 - 8757.993: 6.0373% ( 133) 00:09:34.768 8757.993 - 8817.571: 7.2644% ( 150) 00:09:34.768 8817.571 - 8877.149: 8.4179% ( 141) 00:09:34.768 8877.149 - 8936.727: 9.6859% ( 155) 00:09:34.768 8936.727 - 8996.305: 10.9620% ( 156) 00:09:34.768 8996.305 - 9055.884: 12.2709% ( 160) 00:09:34.768 9055.884 - 9115.462: 13.6289% ( 166) 00:09:34.768 9115.462 - 9175.040: 14.9542% ( 162) 00:09:34.768 9175.040 - 9234.618: 16.2467% ( 158) 00:09:34.768 9234.618 - 9294.196: 17.4820% ( 151) 00:09:34.768 9294.196 - 9353.775: 18.7664% ( 157) 
00:09:34.768 9353.775 - 9413.353: 20.1162% ( 165) 00:09:34.768 9413.353 - 9472.931: 21.5887% ( 180) 00:09:34.768 9472.931 - 9532.509: 23.2821% ( 207) 00:09:34.768 9532.509 - 9592.087: 25.1391% ( 227) 00:09:34.769 9592.087 - 9651.665: 27.2660% ( 260) 00:09:34.769 9651.665 - 9711.244: 29.4830% ( 271) 00:09:34.769 9711.244 - 9770.822: 32.0681% ( 316) 00:09:34.769 9770.822 - 9830.400: 34.9313% ( 350) 00:09:34.769 9830.400 - 9889.978: 38.1217% ( 390) 00:09:34.769 9889.978 - 9949.556: 41.4921% ( 412) 00:09:34.769 9949.556 - 10009.135: 44.8380% ( 409) 00:09:34.769 10009.135 - 10068.713: 48.2657% ( 419) 00:09:34.769 10068.713 - 10128.291: 51.8161% ( 434) 00:09:34.769 10128.291 - 10187.869: 55.4238% ( 441) 00:09:34.769 10187.869 - 10247.447: 58.9905% ( 436) 00:09:34.769 10247.447 - 10307.025: 62.4755% ( 426) 00:09:34.769 10307.025 - 10366.604: 65.7641% ( 402) 00:09:34.769 10366.604 - 10426.182: 68.8645% ( 379) 00:09:34.769 10426.182 - 10485.760: 71.6705% ( 343) 00:09:34.769 10485.760 - 10545.338: 74.3455% ( 327) 00:09:34.769 10545.338 - 10604.916: 76.7834% ( 298) 00:09:34.769 10604.916 - 10664.495: 78.9594% ( 266) 00:09:34.769 10664.495 - 10724.073: 80.7592% ( 220) 00:09:34.769 10724.073 - 10783.651: 82.2399% ( 181) 00:09:34.769 10783.651 - 10843.229: 83.4833% ( 152) 00:09:34.769 10843.229 - 10902.807: 84.4486% ( 118) 00:09:34.769 10902.807 - 10962.385: 85.1522% ( 86) 00:09:34.769 10962.385 - 11021.964: 85.7412% ( 72) 00:09:34.769 11021.964 - 11081.542: 86.2075% ( 57) 00:09:34.769 11081.542 - 11141.120: 86.6656% ( 56) 00:09:34.769 11141.120 - 11200.698: 87.0664% ( 49) 00:09:34.769 11200.698 - 11260.276: 87.3855% ( 39) 00:09:34.769 11260.276 - 11319.855: 87.6800% ( 36) 00:09:34.769 11319.855 - 11379.433: 87.9336% ( 31) 00:09:34.769 11379.433 - 11439.011: 88.1708% ( 29) 00:09:34.769 11439.011 - 11498.589: 88.4162% ( 30) 00:09:34.769 11498.589 - 11558.167: 88.7435% ( 40) 00:09:34.769 11558.167 - 11617.745: 89.0216% ( 34) 00:09:34.769 11617.745 - 11677.324: 89.3570% ( 41) 00:09:34.769 11677.324 - 11736.902: 89.7088% ( 43) 00:09:34.769 11736.902 - 11796.480: 90.0851% ( 46) 00:09:34.769 11796.480 - 11856.058: 90.5268% ( 54) 00:09:34.769 11856.058 - 11915.636: 90.9768% ( 55) 00:09:34.769 11915.636 - 11975.215: 91.4512% ( 58) 00:09:34.769 11975.215 - 12034.793: 91.9012% ( 55) 00:09:34.769 12034.793 - 12094.371: 92.3757% ( 58) 00:09:34.769 12094.371 - 12153.949: 92.8665% ( 60) 00:09:34.769 12153.949 - 12213.527: 93.3328% ( 57) 00:09:34.769 12213.527 - 12273.105: 93.7909% ( 56) 00:09:34.769 12273.105 - 12332.684: 94.2081% ( 51) 00:09:34.769 12332.684 - 12392.262: 94.6335% ( 52) 00:09:34.769 12392.262 - 12451.840: 94.9935% ( 44) 00:09:34.769 12451.840 - 12511.418: 95.3125% ( 39) 00:09:34.769 12511.418 - 12570.996: 95.5906% ( 34) 00:09:34.769 12570.996 - 12630.575: 95.8688% ( 34) 00:09:34.769 12630.575 - 12690.153: 96.0815% ( 26) 00:09:34.769 12690.153 - 12749.731: 96.2778% ( 24) 00:09:34.769 12749.731 - 12809.309: 96.4087% ( 16) 00:09:34.769 12809.309 - 12868.887: 96.5314% ( 15) 00:09:34.769 12868.887 - 12928.465: 96.6296% ( 12) 00:09:34.769 12928.465 - 12988.044: 96.7114% ( 10) 00:09:34.769 12988.044 - 13047.622: 96.7932% ( 10) 00:09:34.769 13047.622 - 13107.200: 96.8505% ( 7) 00:09:34.769 13107.200 - 13166.778: 96.9241% ( 9) 00:09:34.769 13166.778 - 13226.356: 97.0059% ( 10) 00:09:34.769 13226.356 - 13285.935: 97.1122% ( 13) 00:09:34.769 13285.935 - 13345.513: 97.2022% ( 11) 00:09:34.769 13345.513 - 13405.091: 97.2759% ( 9) 00:09:34.769 13405.091 - 13464.669: 97.3413% ( 8) 00:09:34.769 13464.669 - 
13524.247: 97.4149% ( 9) 00:09:34.769 13524.247 - 13583.825: 97.4804% ( 8) 00:09:34.769 13583.825 - 13643.404: 97.5295% ( 6) 00:09:34.769 13643.404 - 13702.982: 97.5949% ( 8) 00:09:34.769 13702.982 - 13762.560: 97.6440% ( 6) 00:09:34.769 13762.560 - 13822.138: 97.6931% ( 6) 00:09:34.769 13822.138 - 13881.716: 97.7503% ( 7) 00:09:34.769 13881.716 - 13941.295: 97.8158% ( 8) 00:09:34.769 13941.295 - 14000.873: 97.8485% ( 4) 00:09:34.769 14000.873 - 14060.451: 97.8894% ( 5) 00:09:34.769 14060.451 - 14120.029: 97.9385% ( 6) 00:09:34.769 14120.029 - 14179.607: 97.9794% ( 5) 00:09:34.769 14179.607 - 14239.185: 98.0121% ( 4) 00:09:34.769 14239.185 - 14298.764: 98.0366% ( 3) 00:09:34.769 14298.764 - 14358.342: 98.0694% ( 4) 00:09:34.769 14358.342 - 14417.920: 98.1021% ( 4) 00:09:34.769 14417.920 - 14477.498: 98.1348% ( 4) 00:09:34.769 14477.498 - 14537.076: 98.1757% ( 5) 00:09:34.769 14537.076 - 14596.655: 98.2084% ( 4) 00:09:34.769 14596.655 - 14656.233: 98.2412% ( 4) 00:09:34.769 14656.233 - 14715.811: 98.2739% ( 4) 00:09:34.769 14715.811 - 14775.389: 98.3066% ( 4) 00:09:34.769 14775.389 - 14834.967: 98.3393% ( 4) 00:09:34.769 14834.967 - 14894.545: 98.3721% ( 4) 00:09:34.769 14894.545 - 14954.124: 98.4048% ( 4) 00:09:34.769 14954.124 - 15013.702: 98.4293% ( 3) 00:09:34.769 15132.858 - 15192.436: 98.4375% ( 1) 00:09:34.769 15192.436 - 15252.015: 98.4620% ( 3) 00:09:34.769 15252.015 - 15371.171: 98.5193% ( 7) 00:09:34.769 15371.171 - 15490.327: 98.5848% ( 8) 00:09:34.769 15490.327 - 15609.484: 98.6420% ( 7) 00:09:34.769 15609.484 - 15728.640: 98.7156% ( 9) 00:09:34.769 15728.640 - 15847.796: 98.7729% ( 7) 00:09:34.769 15847.796 - 15966.953: 98.8465% ( 9) 00:09:34.769 15966.953 - 16086.109: 98.8956% ( 6) 00:09:34.769 16086.109 - 16205.265: 98.9529% ( 7) 00:09:34.769 30504.029 - 30742.342: 98.9856% ( 4) 00:09:34.769 30742.342 - 30980.655: 99.0347% ( 6) 00:09:34.769 30980.655 - 31218.967: 99.0838% ( 6) 00:09:34.769 31218.967 - 31457.280: 99.1329% ( 6) 00:09:34.769 31457.280 - 31695.593: 99.1819% ( 6) 00:09:34.769 31695.593 - 31933.905: 99.2310% ( 6) 00:09:34.769 31933.905 - 32172.218: 99.2883% ( 7) 00:09:34.769 32172.218 - 32410.531: 99.3292% ( 5) 00:09:34.769 32410.531 - 32648.844: 99.3865% ( 7) 00:09:34.769 32648.844 - 32887.156: 99.4355% ( 6) 00:09:34.769 32887.156 - 33125.469: 99.4764% ( 5) 00:09:34.769 37891.724 - 38130.036: 99.4846% ( 1) 00:09:34.769 38130.036 - 38368.349: 99.5337% ( 6) 00:09:34.769 38368.349 - 38606.662: 99.5910% ( 7) 00:09:34.769 38606.662 - 38844.975: 99.6319% ( 5) 00:09:34.769 38844.975 - 39083.287: 99.6891% ( 7) 00:09:34.769 39083.287 - 39321.600: 99.7382% ( 6) 00:09:34.769 39321.600 - 39559.913: 99.7873% ( 6) 00:09:34.769 39559.913 - 39798.225: 99.8446% ( 7) 00:09:34.769 39798.225 - 40036.538: 99.8937% ( 6) 00:09:34.769 40036.538 - 40274.851: 99.9509% ( 7) 00:09:34.769 40274.851 - 40513.164: 100.0000% ( 6) 00:09:34.769 00:09:34.769 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:34.769 ============================================================================== 00:09:34.769 Range in us Cumulative IO count 00:09:34.769 8043.055 - 8102.633: 0.0245% ( 3) 00:09:34.769 8102.633 - 8162.211: 0.0573% ( 4) 00:09:34.769 8162.211 - 8221.789: 0.1473% ( 11) 00:09:34.769 8221.789 - 8281.367: 0.4827% ( 41) 00:09:34.769 8281.367 - 8340.945: 0.8999% ( 51) 00:09:34.769 8340.945 - 8400.524: 1.5380% ( 78) 00:09:34.769 8400.524 - 8460.102: 2.2170% ( 83) 00:09:34.769 8460.102 - 8519.680: 3.0350% ( 100) 00:09:34.769 8519.680 - 8579.258: 3.8858% ( 104) 00:09:34.769 8579.258 
- 8638.836: 4.7611% ( 107) 00:09:34.769 8638.836 - 8698.415: 5.7264% ( 118) 00:09:34.769 8698.415 - 8757.993: 6.8063% ( 132) 00:09:34.769 8757.993 - 8817.571: 7.8043% ( 122) 00:09:34.769 8817.571 - 8877.149: 8.8923% ( 133) 00:09:34.769 8877.149 - 8936.727: 9.9885% ( 134) 00:09:34.769 8936.727 - 8996.305: 11.0602% ( 131) 00:09:34.769 8996.305 - 9055.884: 12.1401% ( 132) 00:09:34.769 9055.884 - 9115.462: 13.3017% ( 142) 00:09:34.769 9115.462 - 9175.040: 14.4715% ( 143) 00:09:34.769 9175.040 - 9234.618: 15.7886% ( 161) 00:09:34.769 9234.618 - 9294.196: 17.2202% ( 175) 00:09:34.769 9294.196 - 9353.775: 18.6846% ( 179) 00:09:34.769 9353.775 - 9413.353: 20.2961% ( 197) 00:09:34.769 9413.353 - 9472.931: 22.1122% ( 222) 00:09:34.769 9472.931 - 9532.509: 24.1083% ( 244) 00:09:34.769 9532.509 - 9592.087: 26.1044% ( 244) 00:09:34.769 9592.087 - 9651.665: 28.5259% ( 296) 00:09:34.769 9651.665 - 9711.244: 31.0864% ( 313) 00:09:34.769 9711.244 - 9770.822: 33.9332% ( 348) 00:09:34.769 9770.822 - 9830.400: 36.8455% ( 356) 00:09:34.769 9830.400 - 9889.978: 39.8478% ( 367) 00:09:34.769 9889.978 - 9949.556: 42.9074% ( 374) 00:09:34.769 9949.556 - 10009.135: 45.9424% ( 371) 00:09:34.769 10009.135 - 10068.713: 49.1656% ( 394) 00:09:34.769 10068.713 - 10128.291: 52.2660% ( 379) 00:09:34.769 10128.291 - 10187.869: 55.4401% ( 388) 00:09:34.769 10187.869 - 10247.447: 58.6387% ( 391) 00:09:34.769 10247.447 - 10307.025: 61.6738% ( 371) 00:09:34.769 10307.025 - 10366.604: 64.8642% ( 390) 00:09:34.769 10366.604 - 10426.182: 67.6620% ( 342) 00:09:34.769 10426.182 - 10485.760: 70.6315% ( 363) 00:09:34.769 10485.760 - 10545.338: 73.1839% ( 312) 00:09:34.769 10545.338 - 10604.916: 75.5890% ( 294) 00:09:34.769 10604.916 - 10664.495: 77.7405% ( 263) 00:09:34.769 10664.495 - 10724.073: 79.6875% ( 238) 00:09:34.769 10724.073 - 10783.651: 81.4709% ( 218) 00:09:34.769 10783.651 - 10843.229: 82.9270% ( 178) 00:09:34.769 10843.229 - 10902.807: 84.1214% ( 146) 00:09:34.769 10902.807 - 10962.385: 85.0049% ( 108) 00:09:34.769 10962.385 - 11021.964: 85.6675% ( 81) 00:09:34.769 11021.964 - 11081.542: 86.2402% ( 70) 00:09:34.769 11081.542 - 11141.120: 86.6819% ( 54) 00:09:34.769 11141.120 - 11200.698: 87.0419% ( 44) 00:09:34.769 11200.698 - 11260.276: 87.3855% ( 42) 00:09:34.769 11260.276 - 11319.855: 87.7045% ( 39) 00:09:34.769 11319.855 - 11379.433: 87.9990% ( 36) 00:09:34.769 11379.433 - 11439.011: 88.3344% ( 41) 00:09:34.769 11439.011 - 11498.589: 88.6371% ( 37) 00:09:34.769 11498.589 - 11558.167: 88.9562% ( 39) 00:09:34.769 11558.167 - 11617.745: 89.3325% ( 46) 00:09:34.769 11617.745 - 11677.324: 89.7170% ( 47) 00:09:34.769 11677.324 - 11736.902: 90.0769% ( 44) 00:09:34.769 11736.902 - 11796.480: 90.4287% ( 43) 00:09:34.769 11796.480 - 11856.058: 90.8377% ( 50) 00:09:34.769 11856.058 - 11915.636: 91.1976% ( 44) 00:09:34.769 11915.636 - 11975.215: 91.5330% ( 41) 00:09:34.770 11975.215 - 12034.793: 91.9094% ( 46) 00:09:34.770 12034.793 - 12094.371: 92.2611% ( 43) 00:09:34.770 12094.371 - 12153.949: 92.6374% ( 46) 00:09:34.770 12153.949 - 12213.527: 93.0710% ( 53) 00:09:34.770 12213.527 - 12273.105: 93.4473% ( 46) 00:09:34.770 12273.105 - 12332.684: 93.8563% ( 50) 00:09:34.770 12332.684 - 12392.262: 94.2245% ( 45) 00:09:34.770 12392.262 - 12451.840: 94.6090% ( 47) 00:09:34.770 12451.840 - 12511.418: 94.9689% ( 44) 00:09:34.770 12511.418 - 12570.996: 95.3043% ( 41) 00:09:34.770 12570.996 - 12630.575: 95.5906% ( 35) 00:09:34.770 12630.575 - 12690.153: 95.7952% ( 25) 00:09:34.770 12690.153 - 12749.731: 95.9342% ( 17) 00:09:34.770 
12749.731 - 12809.309: 96.0815% ( 18) 00:09:34.770 12809.309 - 12868.887: 96.2205% ( 17) 00:09:34.770 12868.887 - 12928.465: 96.3760% ( 19) 00:09:34.770 12928.465 - 12988.044: 96.4660% ( 11) 00:09:34.770 12988.044 - 13047.622: 96.5969% ( 16) 00:09:34.770 13047.622 - 13107.200: 96.7032% ( 13) 00:09:34.770 13107.200 - 13166.778: 96.8259% ( 15) 00:09:34.770 13166.778 - 13226.356: 96.9159% ( 11) 00:09:34.770 13226.356 - 13285.935: 96.9895% ( 9) 00:09:34.770 13285.935 - 13345.513: 97.0304% ( 5) 00:09:34.770 13345.513 - 13405.091: 97.0795% ( 6) 00:09:34.770 13405.091 - 13464.669: 97.1368% ( 7) 00:09:34.770 13464.669 - 13524.247: 97.1940% ( 7) 00:09:34.770 13524.247 - 13583.825: 97.2513% ( 7) 00:09:34.770 13583.825 - 13643.404: 97.3168% ( 8) 00:09:34.770 13643.404 - 13702.982: 97.3986% ( 10) 00:09:34.770 13702.982 - 13762.560: 97.4967% ( 12) 00:09:34.770 13762.560 - 13822.138: 97.5704% ( 9) 00:09:34.770 13822.138 - 13881.716: 97.6849% ( 14) 00:09:34.770 13881.716 - 13941.295: 97.7258% ( 5) 00:09:34.770 13941.295 - 14000.873: 97.7912% ( 8) 00:09:34.770 14000.873 - 14060.451: 97.8567% ( 8) 00:09:34.770 14060.451 - 14120.029: 97.9139% ( 7) 00:09:34.770 14120.029 - 14179.607: 97.9712% ( 7) 00:09:34.770 14179.607 - 14239.185: 98.0203% ( 6) 00:09:34.770 14239.185 - 14298.764: 98.0857% ( 8) 00:09:34.770 14298.764 - 14358.342: 98.1430% ( 7) 00:09:34.770 14358.342 - 14417.920: 98.1921% ( 6) 00:09:34.770 14417.920 - 14477.498: 98.2575% ( 8) 00:09:34.770 14477.498 - 14537.076: 98.3066% ( 6) 00:09:34.770 14537.076 - 14596.655: 98.3312% ( 3) 00:09:34.770 14596.655 - 14656.233: 98.3884% ( 7) 00:09:34.770 14656.233 - 14715.811: 98.4048% ( 2) 00:09:34.770 14715.811 - 14775.389: 98.4211% ( 2) 00:09:34.770 14775.389 - 14834.967: 98.4293% ( 1) 00:09:34.770 15252.015 - 15371.171: 98.4702% ( 5) 00:09:34.770 15371.171 - 15490.327: 98.5357% ( 8) 00:09:34.770 15490.327 - 15609.484: 98.5848% ( 6) 00:09:34.770 15609.484 - 15728.640: 98.6257% ( 5) 00:09:34.770 15728.640 - 15847.796: 98.6911% ( 8) 00:09:34.770 15847.796 - 15966.953: 98.7484% ( 7) 00:09:34.770 15966.953 - 16086.109: 98.7974% ( 6) 00:09:34.770 16086.109 - 16205.265: 98.8384% ( 5) 00:09:34.770 16205.265 - 16324.422: 98.9038% ( 8) 00:09:34.770 16324.422 - 16443.578: 98.9365% ( 4) 00:09:34.770 16443.578 - 16562.735: 98.9529% ( 2) 00:09:34.770 27882.589 - 28001.745: 98.9611% ( 1) 00:09:34.770 28001.745 - 28120.902: 98.9938% ( 4) 00:09:34.770 28120.902 - 28240.058: 99.0101% ( 2) 00:09:34.770 28240.058 - 28359.215: 99.0347% ( 3) 00:09:34.770 28359.215 - 28478.371: 99.0592% ( 3) 00:09:34.770 28478.371 - 28597.527: 99.0838% ( 3) 00:09:34.770 28597.527 - 28716.684: 99.1001% ( 2) 00:09:34.770 28716.684 - 28835.840: 99.1247% ( 3) 00:09:34.770 28835.840 - 28954.996: 99.1492% ( 3) 00:09:34.770 28954.996 - 29074.153: 99.1656% ( 2) 00:09:34.770 29074.153 - 29193.309: 99.1901% ( 3) 00:09:34.770 29193.309 - 29312.465: 99.2147% ( 3) 00:09:34.770 29312.465 - 29431.622: 99.2310% ( 2) 00:09:34.770 29431.622 - 29550.778: 99.2474% ( 2) 00:09:34.770 29550.778 - 29669.935: 99.2719% ( 3) 00:09:34.770 29669.935 - 29789.091: 99.3046% ( 4) 00:09:34.770 29789.091 - 29908.247: 99.3292% ( 3) 00:09:34.770 29908.247 - 30027.404: 99.3537% ( 3) 00:09:34.770 30027.404 - 30146.560: 99.3783% ( 3) 00:09:34.770 30146.560 - 30265.716: 99.3946% ( 2) 00:09:34.770 30265.716 - 30384.873: 99.4192% ( 3) 00:09:34.770 30384.873 - 30504.029: 99.4519% ( 4) 00:09:34.770 30504.029 - 30742.342: 99.4764% ( 3) 00:09:34.770 35270.284 - 35508.596: 99.4846% ( 1) 00:09:34.770 35508.596 - 35746.909: 99.5255% ( 5) 
00:09:34.770 35746.909 - 35985.222: 99.5664% ( 5) 00:09:34.770 35985.222 - 36223.535: 99.6073% ( 5) 00:09:34.770 36223.535 - 36461.847: 99.6564% ( 6) 00:09:34.770 36461.847 - 36700.160: 99.6973% ( 5) 00:09:34.770 36700.160 - 36938.473: 99.7382% ( 5) 00:09:34.770 36938.473 - 37176.785: 99.7873% ( 6) 00:09:34.770 37176.785 - 37415.098: 99.8282% ( 5) 00:09:34.770 37415.098 - 37653.411: 99.8773% ( 6) 00:09:34.770 37653.411 - 37891.724: 99.9264% ( 6) 00:09:34.770 37891.724 - 38130.036: 99.9673% ( 5) 00:09:34.770 38130.036 - 38368.349: 100.0000% ( 4) 00:09:34.770 00:09:34.770 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:34.770 ============================================================================== 00:09:34.770 Range in us Cumulative IO count 00:09:34.770 8162.211 - 8221.789: 0.0245% ( 3) 00:09:34.770 8221.789 - 8281.367: 0.0982% ( 9) 00:09:34.770 8281.367 - 8340.945: 0.2618% ( 20) 00:09:34.770 8340.945 - 8400.524: 0.7117% ( 55) 00:09:34.770 8400.524 - 8460.102: 1.2271% ( 63) 00:09:34.770 8460.102 - 8519.680: 1.9634% ( 90) 00:09:34.770 8519.680 - 8579.258: 2.8796% ( 112) 00:09:34.770 8579.258 - 8638.836: 3.8531% ( 119) 00:09:34.770 8638.836 - 8698.415: 4.9329% ( 132) 00:09:34.770 8698.415 - 8757.993: 6.0373% ( 135) 00:09:34.770 8757.993 - 8817.571: 7.2153% ( 144) 00:09:34.770 8817.571 - 8877.149: 8.4506% ( 151) 00:09:34.770 8877.149 - 8936.727: 9.7022% ( 153) 00:09:34.770 8936.727 - 8996.305: 10.8966% ( 146) 00:09:34.770 8996.305 - 9055.884: 12.1564% ( 154) 00:09:34.770 9055.884 - 9115.462: 13.5226% ( 167) 00:09:34.770 9115.462 - 9175.040: 14.8151% ( 158) 00:09:34.770 9175.040 - 9234.618: 16.1322% ( 161) 00:09:34.770 9234.618 - 9294.196: 17.3675% ( 151) 00:09:34.770 9294.196 - 9353.775: 18.5618% ( 146) 00:09:34.770 9353.775 - 9413.353: 19.9198% ( 166) 00:09:34.770 9413.353 - 9472.931: 21.5232% ( 196) 00:09:34.770 9472.931 - 9532.509: 23.2575% ( 212) 00:09:34.770 9532.509 - 9592.087: 25.2209% ( 240) 00:09:34.770 9592.087 - 9651.665: 27.3069% ( 255) 00:09:34.770 9651.665 - 9711.244: 29.6466% ( 286) 00:09:34.770 9711.244 - 9770.822: 32.1580% ( 307) 00:09:34.770 9770.822 - 9830.400: 34.8822% ( 333) 00:09:34.770 9830.400 - 9889.978: 38.0236% ( 384) 00:09:34.770 9889.978 - 9949.556: 41.2467% ( 394) 00:09:34.770 9949.556 - 10009.135: 44.5926% ( 409) 00:09:34.770 10009.135 - 10068.713: 48.1348% ( 433) 00:09:34.770 10068.713 - 10128.291: 51.7752% ( 445) 00:09:34.770 10128.291 - 10187.869: 55.5137% ( 457) 00:09:34.770 10187.869 - 10247.447: 59.2277% ( 454) 00:09:34.770 10247.447 - 10307.025: 62.7045% ( 425) 00:09:34.770 10307.025 - 10366.604: 66.0831% ( 413) 00:09:34.770 10366.604 - 10426.182: 69.3799% ( 403) 00:09:34.770 10426.182 - 10485.760: 72.4722% ( 378) 00:09:34.770 10485.760 - 10545.338: 75.1227% ( 324) 00:09:34.770 10545.338 - 10604.916: 77.5769% ( 300) 00:09:34.770 10604.916 - 10664.495: 79.7529% ( 266) 00:09:34.770 10664.495 - 10724.073: 81.5772% ( 223) 00:09:34.770 10724.073 - 10783.651: 83.0007% ( 174) 00:09:34.770 10783.651 - 10843.229: 84.1705% ( 143) 00:09:34.770 10843.229 - 10902.807: 85.1440% ( 119) 00:09:34.770 10902.807 - 10962.385: 85.9784% ( 102) 00:09:34.770 10962.385 - 11021.964: 86.5020% ( 64) 00:09:34.770 11021.964 - 11081.542: 86.9274% ( 52) 00:09:34.770 11081.542 - 11141.120: 87.2873% ( 44) 00:09:34.770 11141.120 - 11200.698: 87.6473% ( 44) 00:09:34.770 11200.698 - 11260.276: 87.9581% ( 38) 00:09:34.770 11260.276 - 11319.855: 88.2117% ( 31) 00:09:34.770 11319.855 - 11379.433: 88.3590% ( 18) 00:09:34.770 11379.433 - 11439.011: 88.5062% ( 18) 
00:09:34.770 11439.011 - 11498.589: 88.6535% ( 18) 00:09:34.770 11498.589 - 11558.167: 88.7844% ( 16) 00:09:34.770 11558.167 - 11617.745: 88.9971% ( 26) 00:09:34.770 11617.745 - 11677.324: 89.2916% ( 36) 00:09:34.770 11677.324 - 11736.902: 89.6106% ( 39) 00:09:34.770 11736.902 - 11796.480: 89.9542% ( 42) 00:09:34.770 11796.480 - 11856.058: 90.3714% ( 51) 00:09:34.770 11856.058 - 11915.636: 90.7804% ( 50) 00:09:34.770 11915.636 - 11975.215: 91.2304% ( 55) 00:09:34.770 11975.215 - 12034.793: 91.6639% ( 53) 00:09:34.770 12034.793 - 12094.371: 92.1548% ( 60) 00:09:34.770 12094.371 - 12153.949: 92.6783% ( 64) 00:09:34.770 12153.949 - 12213.527: 93.1774% ( 61) 00:09:34.770 12213.527 - 12273.105: 93.6927% ( 63) 00:09:34.770 12273.105 - 12332.684: 94.2245% ( 65) 00:09:34.770 12332.684 - 12392.262: 94.7071% ( 59) 00:09:34.770 12392.262 - 12451.840: 95.1571% ( 55) 00:09:34.770 12451.840 - 12511.418: 95.5088% ( 43) 00:09:34.770 12511.418 - 12570.996: 95.7952% ( 35) 00:09:34.770 12570.996 - 12630.575: 96.0324% ( 29) 00:09:34.770 12630.575 - 12690.153: 96.2205% ( 23) 00:09:34.770 12690.153 - 12749.731: 96.3842% ( 20) 00:09:34.770 12749.731 - 12809.309: 96.5151% ( 16) 00:09:34.770 12809.309 - 12868.887: 96.6296% ( 14) 00:09:34.770 12868.887 - 12928.465: 96.7277% ( 12) 00:09:34.770 12928.465 - 12988.044: 96.7850% ( 7) 00:09:34.770 12988.044 - 13047.622: 96.8259% ( 5) 00:09:34.770 13047.622 - 13107.200: 96.8505% ( 3) 00:09:34.770 13107.200 - 13166.778: 96.8586% ( 1) 00:09:34.770 13464.669 - 13524.247: 96.8750% ( 2) 00:09:34.770 13524.247 - 13583.825: 96.9159% ( 5) 00:09:34.770 13583.825 - 13643.404: 97.0223% ( 13) 00:09:34.770 13643.404 - 13702.982: 97.1122% ( 11) 00:09:34.770 13702.982 - 13762.560: 97.2104% ( 12) 00:09:34.770 13762.560 - 13822.138: 97.3168% ( 13) 00:09:34.770 13822.138 - 13881.716: 97.4149% ( 12) 00:09:34.770 13881.716 - 13941.295: 97.5131% ( 12) 00:09:34.770 13941.295 - 14000.873: 97.6031% ( 11) 00:09:34.770 14000.873 - 14060.451: 97.7012% ( 12) 00:09:34.771 14060.451 - 14120.029: 97.7994% ( 12) 00:09:34.771 14120.029 - 14179.607: 97.8976% ( 12) 00:09:34.771 14179.607 - 14239.185: 98.0039% ( 13) 00:09:34.771 14239.185 - 14298.764: 98.1103% ( 13) 00:09:34.771 14298.764 - 14358.342: 98.2084% ( 12) 00:09:34.771 14358.342 - 14417.920: 98.2984% ( 11) 00:09:34.771 14417.920 - 14477.498: 98.3639% ( 8) 00:09:34.771 14477.498 - 14537.076: 98.4048% ( 5) 00:09:34.771 14537.076 - 14596.655: 98.4211% ( 2) 00:09:34.771 14596.655 - 14656.233: 98.4293% ( 1) 00:09:34.771 15073.280 - 15132.858: 98.4539% ( 3) 00:09:34.771 15132.858 - 15192.436: 98.4948% ( 5) 00:09:34.771 15192.436 - 15252.015: 98.5275% ( 4) 00:09:34.771 15252.015 - 15371.171: 98.5929% ( 8) 00:09:34.771 15371.171 - 15490.327: 98.6502% ( 7) 00:09:34.771 15490.327 - 15609.484: 98.7156% ( 8) 00:09:34.771 15609.484 - 15728.640: 98.7811% ( 8) 00:09:34.771 15728.640 - 15847.796: 98.8547% ( 9) 00:09:34.771 15847.796 - 15966.953: 98.9202% ( 8) 00:09:34.771 15966.953 - 16086.109: 98.9529% ( 4) 00:09:34.771 25499.462 - 25618.618: 98.9611% ( 1) 00:09:34.771 25618.618 - 25737.775: 98.9774% ( 2) 00:09:34.771 25737.775 - 25856.931: 98.9938% ( 2) 00:09:34.771 25856.931 - 25976.087: 99.0183% ( 3) 00:09:34.771 25976.087 - 26095.244: 99.0429% ( 3) 00:09:34.771 26095.244 - 26214.400: 99.0674% ( 3) 00:09:34.771 26214.400 - 26333.556: 99.0920% ( 3) 00:09:34.771 26333.556 - 26452.713: 99.1165% ( 3) 00:09:34.771 26452.713 - 26571.869: 99.1410% ( 3) 00:09:34.771 26571.869 - 26691.025: 99.1656% ( 3) 00:09:34.771 26691.025 - 26810.182: 99.1901% ( 3) 00:09:34.771 
26810.182 - 26929.338: 99.2228% ( 4) 00:09:34.771 26929.338 - 27048.495: 99.2474% ( 3) 00:09:34.771 27048.495 - 27167.651: 99.2719% ( 3) 00:09:34.771 27167.651 - 27286.807: 99.2965% ( 3) 00:09:34.771 27286.807 - 27405.964: 99.3210% ( 3) 00:09:34.771 27405.964 - 27525.120: 99.3455% ( 3) 00:09:34.771 27525.120 - 27644.276: 99.3701% ( 3) 00:09:34.771 27644.276 - 27763.433: 99.3946% ( 3) 00:09:34.771 27763.433 - 27882.589: 99.4192% ( 3) 00:09:34.771 27882.589 - 28001.745: 99.4437% ( 3) 00:09:34.771 28001.745 - 28120.902: 99.4764% ( 4) 00:09:34.771 33125.469 - 33363.782: 99.5255% ( 6) 00:09:34.771 33363.782 - 33602.095: 99.5746% ( 6) 00:09:34.771 33602.095 - 33840.407: 99.6237% ( 6) 00:09:34.771 33840.407 - 34078.720: 99.6728% ( 6) 00:09:34.771 34078.720 - 34317.033: 99.7300% ( 7) 00:09:34.771 34317.033 - 34555.345: 99.7873% ( 7) 00:09:34.771 34555.345 - 34793.658: 99.8364% ( 6) 00:09:34.771 34793.658 - 35031.971: 99.8937% ( 7) 00:09:34.771 35031.971 - 35270.284: 99.9427% ( 6) 00:09:34.771 35270.284 - 35508.596: 100.0000% ( 7) 00:09:34.771 00:09:34.771 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:34.771 ============================================================================== 00:09:34.771 Range in us Cumulative IO count 00:09:34.771 8102.633 - 8162.211: 0.0082% ( 1) 00:09:34.771 8162.211 - 8221.789: 0.0654% ( 7) 00:09:34.771 8221.789 - 8281.367: 0.1554% ( 11) 00:09:34.771 8281.367 - 8340.945: 0.3109% ( 19) 00:09:34.771 8340.945 - 8400.524: 0.6135% ( 37) 00:09:34.771 8400.524 - 8460.102: 1.1371% ( 64) 00:09:34.771 8460.102 - 8519.680: 1.9224% ( 96) 00:09:34.771 8519.680 - 8579.258: 2.8223% ( 110) 00:09:34.771 8579.258 - 8638.836: 3.9185% ( 134) 00:09:34.771 8638.836 - 8698.415: 5.0965% ( 144) 00:09:34.771 8698.415 - 8757.993: 6.2991% ( 147) 00:09:34.771 8757.993 - 8817.571: 7.5753% ( 156) 00:09:34.771 8817.571 - 8877.149: 8.8269% ( 153) 00:09:34.771 8877.149 - 8936.727: 10.1358% ( 160) 00:09:34.771 8936.727 - 8996.305: 11.3711% ( 151) 00:09:34.771 8996.305 - 9055.884: 12.6718% ( 159) 00:09:34.771 9055.884 - 9115.462: 13.9562% ( 157) 00:09:34.771 9115.462 - 9175.040: 15.2323% ( 156) 00:09:34.771 9175.040 - 9234.618: 16.5249% ( 158) 00:09:34.771 9234.618 - 9294.196: 17.7847% ( 154) 00:09:34.771 9294.196 - 9353.775: 19.1590% ( 168) 00:09:34.771 9353.775 - 9413.353: 20.6724% ( 185) 00:09:34.771 9413.353 - 9472.931: 22.4149% ( 213) 00:09:34.771 9472.931 - 9532.509: 24.3128% ( 232) 00:09:34.771 9532.509 - 9592.087: 26.2925% ( 242) 00:09:34.771 9592.087 - 9651.665: 28.3704% ( 254) 00:09:34.771 9651.665 - 9711.244: 30.5465% ( 266) 00:09:34.771 9711.244 - 9770.822: 32.9352% ( 292) 00:09:34.771 9770.822 - 9830.400: 35.6675% ( 334) 00:09:34.771 9830.400 - 9889.978: 38.5717% ( 355) 00:09:34.771 9889.978 - 9949.556: 41.8685% ( 403) 00:09:34.771 9949.556 - 10009.135: 45.2225% ( 410) 00:09:34.771 10009.135 - 10068.713: 48.5848% ( 411) 00:09:34.771 10068.713 - 10128.291: 52.0452% ( 423) 00:09:34.771 10128.291 - 10187.869: 55.6119% ( 436) 00:09:34.771 10187.869 - 10247.447: 59.0151% ( 416) 00:09:34.771 10247.447 - 10307.025: 62.5900% ( 437) 00:09:34.771 10307.025 - 10366.604: 65.9849% ( 415) 00:09:34.771 10366.604 - 10426.182: 69.0281% ( 372) 00:09:34.771 10426.182 - 10485.760: 71.8995% ( 351) 00:09:34.771 10485.760 - 10545.338: 74.5173% ( 320) 00:09:34.771 10545.338 - 10604.916: 77.0124% ( 305) 00:09:34.771 10604.916 - 10664.495: 79.1967% ( 267) 00:09:34.771 10664.495 - 10724.073: 81.0291% ( 224) 00:09:34.771 10724.073 - 10783.651: 82.4280% ( 171) 00:09:34.771 10783.651 - 10843.229: 
83.7205% ( 158) 00:09:34.771 10843.229 - 10902.807: 84.7840% ( 130) 00:09:34.771 10902.807 - 10962.385: 85.4876% ( 86) 00:09:34.771 10962.385 - 11021.964: 86.0193% ( 65) 00:09:34.771 11021.964 - 11081.542: 86.4856% ( 57) 00:09:34.771 11081.542 - 11141.120: 86.9274% ( 54) 00:09:34.771 11141.120 - 11200.698: 87.3527% ( 52) 00:09:34.771 11200.698 - 11260.276: 87.6636% ( 38) 00:09:34.771 11260.276 - 11319.855: 87.9418% ( 34) 00:09:34.771 11319.855 - 11379.433: 88.1708% ( 28) 00:09:34.771 11379.433 - 11439.011: 88.3999% ( 28) 00:09:34.771 11439.011 - 11498.589: 88.6289% ( 28) 00:09:34.771 11498.589 - 11558.167: 88.8825% ( 31) 00:09:34.771 11558.167 - 11617.745: 89.1770% ( 36) 00:09:34.771 11617.745 - 11677.324: 89.5043% ( 40) 00:09:34.771 11677.324 - 11736.902: 89.8397% ( 41) 00:09:34.771 11736.902 - 11796.480: 90.1914% ( 43) 00:09:34.771 11796.480 - 11856.058: 90.5268% ( 41) 00:09:34.771 11856.058 - 11915.636: 90.9359% ( 50) 00:09:34.771 11915.636 - 11975.215: 91.3858% ( 55) 00:09:34.771 11975.215 - 12034.793: 91.7866% ( 49) 00:09:34.771 12034.793 - 12094.371: 92.2202% ( 53) 00:09:34.771 12094.371 - 12153.949: 92.6783% ( 56) 00:09:34.771 12153.949 - 12213.527: 93.1446% ( 57) 00:09:34.771 12213.527 - 12273.105: 93.6355% ( 60) 00:09:34.771 12273.105 - 12332.684: 94.0854% ( 55) 00:09:34.771 12332.684 - 12392.262: 94.5435% ( 56) 00:09:34.771 12392.262 - 12451.840: 94.9771% ( 53) 00:09:34.771 12451.840 - 12511.418: 95.3125% ( 41) 00:09:34.771 12511.418 - 12570.996: 95.6070% ( 36) 00:09:34.771 12570.996 - 12630.575: 95.8606% ( 31) 00:09:34.771 12630.575 - 12690.153: 96.0651% ( 25) 00:09:34.771 12690.153 - 12749.731: 96.2533% ( 23) 00:09:34.771 12749.731 - 12809.309: 96.4169% ( 20) 00:09:34.771 12809.309 - 12868.887: 96.5723% ( 19) 00:09:34.771 12868.887 - 12928.465: 96.7359% ( 20) 00:09:34.771 12928.465 - 12988.044: 96.8668% ( 16) 00:09:34.771 12988.044 - 13047.622: 96.9813% ( 14) 00:09:34.771 13047.622 - 13107.200: 97.0223% ( 5) 00:09:34.771 13107.200 - 13166.778: 97.0550% ( 4) 00:09:34.771 13166.778 - 13226.356: 97.0877% ( 4) 00:09:34.771 13226.356 - 13285.935: 97.1204% ( 4) 00:09:34.771 13285.935 - 13345.513: 97.1613% ( 5) 00:09:34.771 13345.513 - 13405.091: 97.1940% ( 4) 00:09:34.771 13405.091 - 13464.669: 97.2186% ( 3) 00:09:34.771 13464.669 - 13524.247: 97.2513% ( 4) 00:09:34.771 13524.247 - 13583.825: 97.2922% ( 5) 00:09:34.771 13583.825 - 13643.404: 97.3249% ( 4) 00:09:34.771 13643.404 - 13702.982: 97.3495% ( 3) 00:09:34.771 13702.982 - 13762.560: 97.3740% ( 3) 00:09:34.771 13762.560 - 13822.138: 97.3822% ( 1) 00:09:34.771 13941.295 - 14000.873: 97.3904% ( 1) 00:09:34.771 14000.873 - 14060.451: 97.4149% ( 3) 00:09:34.771 14060.451 - 14120.029: 97.4476% ( 4) 00:09:34.771 14120.029 - 14179.607: 97.4804% ( 4) 00:09:34.771 14179.607 - 14239.185: 97.5131% ( 4) 00:09:34.771 14239.185 - 14298.764: 97.5458% ( 4) 00:09:34.771 14298.764 - 14358.342: 97.5949% ( 6) 00:09:34.771 14358.342 - 14417.920: 97.7012% ( 13) 00:09:34.771 14417.920 - 14477.498: 97.8076% ( 13) 00:09:34.771 14477.498 - 14537.076: 97.8894% ( 10) 00:09:34.771 14537.076 - 14596.655: 97.9957% ( 13) 00:09:34.771 14596.655 - 14656.233: 98.1021% ( 13) 00:09:34.771 14656.233 - 14715.811: 98.2084% ( 13) 00:09:34.771 14715.811 - 14775.389: 98.3066% ( 12) 00:09:34.771 14775.389 - 14834.967: 98.3966% ( 11) 00:09:34.771 14834.967 - 14894.545: 98.4866% ( 11) 00:09:34.771 14894.545 - 14954.124: 98.5766% ( 11) 00:09:34.771 14954.124 - 15013.702: 98.6502% ( 9) 00:09:34.771 15013.702 - 15073.280: 98.7156% ( 8) 00:09:34.771 15073.280 - 15132.858: 
98.7811% ( 8) 00:09:34.771 15132.858 - 15192.436: 98.8465% ( 8) 00:09:34.771 15192.436 - 15252.015: 98.9120% ( 8) 00:09:34.771 15252.015 - 15371.171: 98.9529% ( 5) 00:09:34.771 23354.647 - 23473.804: 98.9692% ( 2) 00:09:34.771 23473.804 - 23592.960: 98.9938% ( 3) 00:09:34.771 23592.960 - 23712.116: 99.0183% ( 3) 00:09:34.771 23712.116 - 23831.273: 99.0510% ( 4) 00:09:34.771 23831.273 - 23950.429: 99.0756% ( 3) 00:09:34.771 23950.429 - 24069.585: 99.1001% ( 3) 00:09:34.771 24069.585 - 24188.742: 99.1247% ( 3) 00:09:34.771 24188.742 - 24307.898: 99.1492% ( 3) 00:09:34.771 24307.898 - 24427.055: 99.1656% ( 2) 00:09:34.771 24427.055 - 24546.211: 99.1983% ( 4) 00:09:34.771 24546.211 - 24665.367: 99.2228% ( 3) 00:09:34.771 24665.367 - 24784.524: 99.2474% ( 3) 00:09:34.771 24784.524 - 24903.680: 99.2719% ( 3) 00:09:34.771 24903.680 - 25022.836: 99.2965% ( 3) 00:09:34.771 25022.836 - 25141.993: 99.3210% ( 3) 00:09:34.771 25141.993 - 25261.149: 99.3455% ( 3) 00:09:34.771 25261.149 - 25380.305: 99.3783% ( 4) 00:09:34.771 25380.305 - 25499.462: 99.4028% ( 3) 00:09:34.771 25499.462 - 25618.618: 99.4274% ( 3) 00:09:34.771 25618.618 - 25737.775: 99.4437% ( 2) 00:09:34.771 25737.775 - 25856.931: 99.4683% ( 3) 00:09:34.771 25856.931 - 25976.087: 99.4764% ( 1) 00:09:34.772 30742.342 - 30980.655: 99.4846% ( 1) 00:09:34.772 30980.655 - 31218.967: 99.5337% ( 6) 00:09:34.772 31218.967 - 31457.280: 99.5828% ( 6) 00:09:34.772 31457.280 - 31695.593: 99.6155% ( 4) 00:09:34.772 31695.593 - 31933.905: 99.6728% ( 7) 00:09:34.772 31933.905 - 32172.218: 99.7219% ( 6) 00:09:34.772 32172.218 - 32410.531: 99.7791% ( 7) 00:09:34.772 32410.531 - 32648.844: 99.8364% ( 7) 00:09:34.772 32648.844 - 32887.156: 99.8937% ( 7) 00:09:34.772 32887.156 - 33125.469: 99.9509% ( 7) 00:09:34.772 33125.469 - 33363.782: 100.0000% ( 6) 00:09:34.772 00:09:34.772 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:34.772 ============================================================================== 00:09:34.772 Range in us Cumulative IO count 00:09:34.772 8162.211 - 8221.789: 0.0409% ( 5) 00:09:34.772 8221.789 - 8281.367: 0.1636% ( 15) 00:09:34.772 8281.367 - 8340.945: 0.3681% ( 25) 00:09:34.772 8340.945 - 8400.524: 0.6381% ( 33) 00:09:34.772 8400.524 - 8460.102: 1.1126% ( 58) 00:09:34.772 8460.102 - 8519.680: 1.7916% ( 83) 00:09:34.772 8519.680 - 8579.258: 2.6505% ( 105) 00:09:34.772 8579.258 - 8638.836: 3.8040% ( 141) 00:09:34.772 8638.836 - 8698.415: 4.8838% ( 132) 00:09:34.772 8698.415 - 8757.993: 6.1437% ( 154) 00:09:34.772 8757.993 - 8817.571: 7.4035% ( 154) 00:09:34.772 8817.571 - 8877.149: 8.7205% ( 161) 00:09:34.772 8877.149 - 8936.727: 9.9804% ( 154) 00:09:34.772 8936.727 - 8996.305: 11.3056% ( 162) 00:09:34.772 8996.305 - 9055.884: 12.6063% ( 159) 00:09:34.772 9055.884 - 9115.462: 13.9234% ( 161) 00:09:34.772 9115.462 - 9175.040: 15.2160% ( 158) 00:09:34.772 9175.040 - 9234.618: 16.5249% ( 160) 00:09:34.772 9234.618 - 9294.196: 17.7683% ( 152) 00:09:34.772 9294.196 - 9353.775: 19.0609% ( 158) 00:09:34.772 9353.775 - 9413.353: 20.4188% ( 166) 00:09:34.772 9413.353 - 9472.931: 22.0468% ( 199) 00:09:34.772 9472.931 - 9532.509: 23.8547% ( 221) 00:09:34.772 9532.509 - 9592.087: 25.8917% ( 249) 00:09:34.772 9592.087 - 9651.665: 28.0923% ( 269) 00:09:34.772 9651.665 - 9711.244: 30.3010% ( 270) 00:09:34.772 9711.244 - 9770.822: 32.6980% ( 293) 00:09:34.772 9770.822 - 9830.400: 35.4467% ( 336) 00:09:34.772 9830.400 - 9889.978: 38.4490% ( 367) 00:09:34.772 9889.978 - 9949.556: 41.7212% ( 400) 00:09:34.772 9949.556 - 10009.135: 
45.0180% ( 403) 00:09:34.772 10009.135 - 10068.713: 48.5438% ( 431) 00:09:34.772 10068.713 - 10128.291: 52.1433% ( 440) 00:09:34.772 10128.291 - 10187.869: 55.6692% ( 431) 00:09:34.772 10187.869 - 10247.447: 59.2687% ( 440) 00:09:34.772 10247.447 - 10307.025: 62.6963% ( 419) 00:09:34.772 10307.025 - 10366.604: 66.2958% ( 440) 00:09:34.772 10366.604 - 10426.182: 69.4863% ( 390) 00:09:34.772 10426.182 - 10485.760: 72.4313% ( 360) 00:09:34.772 10485.760 - 10545.338: 75.0900% ( 325) 00:09:34.772 10545.338 - 10604.916: 77.5524% ( 301) 00:09:34.772 10604.916 - 10664.495: 79.5893% ( 249) 00:09:34.772 10664.495 - 10724.073: 81.4136% ( 223) 00:09:34.772 10724.073 - 10783.651: 82.9270% ( 185) 00:09:34.772 10783.651 - 10843.229: 84.0887% ( 142) 00:09:34.772 10843.229 - 10902.807: 84.9722% ( 108) 00:09:34.772 10902.807 - 10962.385: 85.6757% ( 86) 00:09:34.772 10962.385 - 11021.964: 86.2238% ( 67) 00:09:34.772 11021.964 - 11081.542: 86.6656% ( 54) 00:09:34.772 11081.542 - 11141.120: 87.0828% ( 51) 00:09:34.772 11141.120 - 11200.698: 87.4509% ( 45) 00:09:34.772 11200.698 - 11260.276: 87.7700% ( 39) 00:09:34.772 11260.276 - 11319.855: 88.0399% ( 33) 00:09:34.772 11319.855 - 11379.433: 88.3099% ( 33) 00:09:34.772 11379.433 - 11439.011: 88.4899% ( 22) 00:09:34.772 11439.011 - 11498.589: 88.6453% ( 19) 00:09:34.772 11498.589 - 11558.167: 88.8825% ( 29) 00:09:34.772 11558.167 - 11617.745: 89.1279% ( 30) 00:09:34.772 11617.745 - 11677.324: 89.4143% ( 35) 00:09:34.772 11677.324 - 11736.902: 89.8069% ( 48) 00:09:34.772 11736.902 - 11796.480: 90.1914% ( 47) 00:09:34.772 11796.480 - 11856.058: 90.5759% ( 47) 00:09:34.772 11856.058 - 11915.636: 90.9686% ( 48) 00:09:34.772 11915.636 - 11975.215: 91.3694% ( 49) 00:09:34.772 11975.215 - 12034.793: 91.8112% ( 54) 00:09:34.772 12034.793 - 12094.371: 92.2693% ( 56) 00:09:34.772 12094.371 - 12153.949: 92.7520% ( 59) 00:09:34.772 12153.949 - 12213.527: 93.2428% ( 60) 00:09:34.772 12213.527 - 12273.105: 93.7009% ( 56) 00:09:34.772 12273.105 - 12332.684: 94.1345% ( 53) 00:09:34.772 12332.684 - 12392.262: 94.6090% ( 58) 00:09:34.772 12392.262 - 12451.840: 95.0834% ( 58) 00:09:34.772 12451.840 - 12511.418: 95.4598% ( 46) 00:09:34.772 12511.418 - 12570.996: 95.8033% ( 42) 00:09:34.772 12570.996 - 12630.575: 96.0897% ( 35) 00:09:34.772 12630.575 - 12690.153: 96.2860% ( 24) 00:09:34.772 12690.153 - 12749.731: 96.4578% ( 21) 00:09:34.772 12749.731 - 12809.309: 96.6296% ( 21) 00:09:34.772 12809.309 - 12868.887: 96.7850% ( 19) 00:09:34.772 12868.887 - 12928.465: 96.8914% ( 13) 00:09:34.772 12928.465 - 12988.044: 96.9977% ( 13) 00:09:34.772 12988.044 - 13047.622: 97.0877% ( 11) 00:09:34.772 13047.622 - 13107.200: 97.1695% ( 10) 00:09:34.772 13107.200 - 13166.778: 97.2349% ( 8) 00:09:34.772 13166.778 - 13226.356: 97.3004% ( 8) 00:09:34.772 13226.356 - 13285.935: 97.3413% ( 5) 00:09:34.772 13285.935 - 13345.513: 97.3740% ( 4) 00:09:34.772 13345.513 - 13405.091: 97.3822% ( 1) 00:09:34.772 13881.716 - 13941.295: 97.4067% ( 3) 00:09:34.772 13941.295 - 14000.873: 97.4313% ( 3) 00:09:34.772 14000.873 - 14060.451: 97.4558% ( 3) 00:09:34.772 14060.451 - 14120.029: 97.4885% ( 4) 00:09:34.772 14120.029 - 14179.607: 97.5295% ( 5) 00:09:34.772 14179.607 - 14239.185: 97.5540% ( 3) 00:09:34.772 14239.185 - 14298.764: 97.6031% ( 6) 00:09:34.773 14298.764 - 14358.342: 97.6522% ( 6) 00:09:34.773 14358.342 - 14417.920: 97.7258% ( 9) 00:09:34.773 14417.920 - 14477.498: 97.7912% ( 8) 00:09:34.773 14477.498 - 14537.076: 97.8567% ( 8) 00:09:34.773 14537.076 - 14596.655: 97.9139% ( 7) 00:09:34.773 
14596.655 - 14656.233: 97.9794% ( 8) 00:09:34.773 14656.233 - 14715.811: 98.0448% ( 8) 00:09:34.773 14715.811 - 14775.389: 98.1185% ( 9) 00:09:34.773 14775.389 - 14834.967: 98.1839% ( 8) 00:09:34.773 14834.967 - 14894.545: 98.2330% ( 6) 00:09:34.773 14894.545 - 14954.124: 98.2657% ( 4) 00:09:34.773 14954.124 - 15013.702: 98.3066% ( 5) 00:09:34.773 15013.702 - 15073.280: 98.3721% ( 8) 00:09:34.773 15073.280 - 15132.858: 98.4293% ( 7) 00:09:34.773 15132.858 - 15192.436: 98.4948% ( 8) 00:09:34.773 15192.436 - 15252.015: 98.5438% ( 6) 00:09:34.773 15252.015 - 15371.171: 98.6175% ( 9) 00:09:34.773 15371.171 - 15490.327: 98.6829% ( 8) 00:09:34.773 15490.327 - 15609.484: 98.7565% ( 9) 00:09:34.773 15609.484 - 15728.640: 98.8138% ( 7) 00:09:34.773 15728.640 - 15847.796: 98.8874% ( 9) 00:09:34.773 15847.796 - 15966.953: 98.9529% ( 8) 00:09:34.773 20852.364 - 20971.520: 98.9692% ( 2) 00:09:34.773 20971.520 - 21090.676: 98.9856% ( 2) 00:09:34.773 21090.676 - 21209.833: 99.0101% ( 3) 00:09:34.773 21209.833 - 21328.989: 99.0510% ( 5) 00:09:34.773 21328.989 - 21448.145: 99.0674% ( 2) 00:09:34.773 21448.145 - 21567.302: 99.0920% ( 3) 00:09:34.773 21567.302 - 21686.458: 99.1165% ( 3) 00:09:34.773 21686.458 - 21805.615: 99.1410% ( 3) 00:09:34.773 21805.615 - 21924.771: 99.1656% ( 3) 00:09:34.773 21924.771 - 22043.927: 99.1901% ( 3) 00:09:34.773 22043.927 - 22163.084: 99.2147% ( 3) 00:09:34.773 22163.084 - 22282.240: 99.2310% ( 2) 00:09:34.773 22282.240 - 22401.396: 99.2556% ( 3) 00:09:34.773 22401.396 - 22520.553: 99.2801% ( 3) 00:09:34.773 22520.553 - 22639.709: 99.3128% ( 4) 00:09:34.773 22639.709 - 22758.865: 99.3374% ( 3) 00:09:34.773 22758.865 - 22878.022: 99.3619% ( 3) 00:09:34.773 22878.022 - 22997.178: 99.3865% ( 3) 00:09:34.773 22997.178 - 23116.335: 99.4110% ( 3) 00:09:34.773 23116.335 - 23235.491: 99.4437% ( 4) 00:09:34.773 23235.491 - 23354.647: 99.4683% ( 3) 00:09:34.773 23354.647 - 23473.804: 99.4764% ( 1) 00:09:34.773 28240.058 - 28359.215: 99.5010% ( 3) 00:09:34.773 28359.215 - 28478.371: 99.5255% ( 3) 00:09:34.773 28478.371 - 28597.527: 99.5582% ( 4) 00:09:34.773 28597.527 - 28716.684: 99.5828% ( 3) 00:09:34.773 28716.684 - 28835.840: 99.6073% ( 3) 00:09:34.773 28835.840 - 28954.996: 99.6401% ( 4) 00:09:34.773 28954.996 - 29074.153: 99.6646% ( 3) 00:09:34.773 29074.153 - 29193.309: 99.6973% ( 4) 00:09:34.773 29193.309 - 29312.465: 99.7219% ( 3) 00:09:34.773 29312.465 - 29431.622: 99.7464% ( 3) 00:09:34.773 29431.622 - 29550.778: 99.7791% ( 4) 00:09:34.773 29550.778 - 29669.935: 99.8118% ( 4) 00:09:34.773 29669.935 - 29789.091: 99.8364% ( 3) 00:09:34.773 29789.091 - 29908.247: 99.8609% ( 3) 00:09:34.773 29908.247 - 30027.404: 99.8855% ( 3) 00:09:34.773 30027.404 - 30146.560: 99.9100% ( 3) 00:09:34.773 30146.560 - 30265.716: 99.9346% ( 3) 00:09:34.773 30265.716 - 30384.873: 99.9591% ( 3) 00:09:34.773 30384.873 - 30504.029: 99.9918% ( 4) 00:09:34.773 30504.029 - 30742.342: 100.0000% ( 1) 00:09:34.773 00:09:34.773 15:17:48 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:36.148 Initializing NVMe Controllers 00:09:36.148 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:36.148 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:36.148 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:36.148 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:36.148 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:36.148 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 
00:09:36.148 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:36.148 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:36.148 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:36.148 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:36.148 Initialization complete. Launching workers. 00:09:36.148 ======================================================== 00:09:36.148 Latency(us) 00:09:36.149 Device Information : IOPS MiB/s Average min max 00:09:36.149 PCIE (0000:00:11.0) NSID 1 from core 0: 12541.30 146.97 10233.22 8335.41 35163.17 00:09:36.149 PCIE (0000:00:13.0) NSID 1 from core 0: 12541.30 146.97 10219.24 8249.45 33542.99 00:09:36.149 PCIE (0000:00:10.0) NSID 1 from core 0: 12541.30 146.97 10203.09 8136.90 32021.70 00:09:36.149 PCIE (0000:00:12.0) NSID 1 from core 0: 12541.30 146.97 10188.85 8316.05 30151.79 00:09:36.149 PCIE (0000:00:12.0) NSID 2 from core 0: 12541.30 146.97 10174.66 8194.43 29024.36 00:09:36.149 PCIE (0000:00:12.0) NSID 3 from core 0: 12541.30 146.97 10160.53 8262.81 27261.15 00:09:36.149 ======================================================== 00:09:36.149 Total : 75247.82 881.81 10196.60 8136.90 35163.17 00:09:36.149 00:09:36.149 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:36.149 ================================================================================= 00:09:36.149 1.00000% : 8638.836us 00:09:36.149 10.00000% : 8996.305us 00:09:36.149 25.00000% : 9413.353us 00:09:36.149 50.00000% : 9949.556us 00:09:36.149 75.00000% : 10485.760us 00:09:36.149 90.00000% : 11141.120us 00:09:36.149 95.00000% : 12094.371us 00:09:36.149 98.00000% : 13524.247us 00:09:36.149 99.00000% : 25380.305us 00:09:36.149 99.50000% : 33602.095us 00:09:36.149 99.90000% : 34793.658us 00:09:36.149 99.99000% : 35270.284us 00:09:36.149 99.99900% : 35270.284us 00:09:36.149 99.99990% : 35270.284us 00:09:36.149 99.99999% : 35270.284us 00:09:36.149 00:09:36.149 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:36.149 ================================================================================= 00:09:36.149 1.00000% : 8638.836us 00:09:36.149 10.00000% : 8996.305us 00:09:36.149 25.00000% : 9413.353us 00:09:36.149 50.00000% : 9949.556us 00:09:36.149 75.00000% : 10485.760us 00:09:36.149 90.00000% : 11141.120us 00:09:36.149 95.00000% : 12153.949us 00:09:36.149 98.00000% : 13405.091us 00:09:36.149 99.00000% : 24903.680us 00:09:36.149 99.50000% : 31933.905us 00:09:36.149 99.90000% : 33363.782us 00:09:36.149 99.99000% : 33602.095us 00:09:36.149 99.99900% : 33602.095us 00:09:36.149 99.99990% : 33602.095us 00:09:36.149 99.99999% : 33602.095us 00:09:36.149 00:09:36.149 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:36.149 ================================================================================= 00:09:36.149 1.00000% : 8460.102us 00:09:36.149 10.00000% : 8936.727us 00:09:36.149 25.00000% : 9413.353us 00:09:36.149 50.00000% : 9949.556us 00:09:36.149 75.00000% : 10545.338us 00:09:36.149 90.00000% : 11260.276us 00:09:36.149 95.00000% : 12153.949us 00:09:36.149 98.00000% : 13524.247us 00:09:36.149 99.00000% : 23116.335us 00:09:36.149 99.50000% : 29908.247us 00:09:36.149 99.90000% : 31695.593us 00:09:36.149 99.99000% : 32172.218us 00:09:36.149 99.99900% : 32172.218us 00:09:36.149 99.99990% : 32172.218us 00:09:36.149 99.99999% : 32172.218us 00:09:36.149 00:09:36.149 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:36.149 
00:09:36.149 =================================================================================
00:09:36.149   1.00000% :  8638.836us
00:09:36.149  10.00000% :  8996.305us
00:09:36.149  25.00000% :  9353.775us
00:09:36.149  50.00000% :  9949.556us
00:09:36.149  75.00000% : 10485.760us
00:09:36.149  90.00000% : 11200.698us
00:09:36.149  95.00000% : 12213.527us
00:09:36.149  98.00000% : 13405.091us
00:09:36.149  99.00000% : 21924.771us
00:09:36.149  99.50000% : 28359.215us
00:09:36.149  99.90000% : 29908.247us
00:09:36.149  99.99000% : 30146.560us
00:09:36.149  99.99900% : 30265.716us
00:09:36.149  99.99990% : 30265.716us
00:09:36.149  99.99999% : 30265.716us
00:09:36.149
00:09:36.149 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:36.149 =================================================================================
00:09:36.149   1.00000% :  8579.258us
00:09:36.149  10.00000% :  8996.305us
00:09:36.149  25.00000% :  9413.353us
00:09:36.149  50.00000% :  9949.556us
00:09:36.149  75.00000% : 10485.760us
00:09:36.149  90.00000% : 11141.120us
00:09:36.149  95.00000% : 12094.371us
00:09:36.149  98.00000% : 13762.560us
00:09:36.149  99.00000% : 20733.207us
00:09:36.149  99.50000% : 27167.651us
00:09:36.149  99.90000% : 28716.684us
00:09:36.149  99.99000% : 29074.153us
00:09:36.149  99.99900% : 29074.153us
00:09:36.149  99.99990% : 29074.153us
00:09:36.149  99.99999% : 29074.153us
00:09:36.149
00:09:36.149 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:36.149 =================================================================================
00:09:36.149   1.00000% :  8638.836us
00:09:36.149  10.00000% :  8996.305us
00:09:36.149  25.00000% :  9413.353us
00:09:36.149  50.00000% :  9949.556us
00:09:36.149  75.00000% : 10485.760us
00:09:36.149  90.00000% : 11141.120us
00:09:36.149  95.00000% : 12094.371us
00:09:36.149  98.00000% : 13881.716us
00:09:36.149  99.00000% : 19184.175us
00:09:36.149  99.50000% : 25499.462us
00:09:36.149  99.90000% : 26929.338us
00:09:36.149  99.99000% : 27286.807us
00:09:36.149  99.99900% : 27286.807us
00:09:36.149  99.99990% : 27286.807us
00:09:36.149  99.99999% : 27286.807us
00:09:36.149
00:09:36.149 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:36.149 ==============================================================================
00:09:36.149        Range in us     Cumulative IO count
00:09:36.149   8281.367 -  8340.945:   0.0080% (     1)
00:09:36.149 [ ... per-bucket cumulative counts ... ]
00:09:36.150  35031.971 - 35270.284: 100.0000% (     5)
00:09:36.150
00:09:36.150 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:36.150 ==============================================================================
00:09:36.150        Range in us     Cumulative IO count
00:09:36.150   8221.789 -  8281.367:   0.0319% (     4)
00:09:36.150 [ ... per-bucket cumulative counts ... ]
00:09:36.150  33363.782 - 33602.095: 100.0000% (     7)
00:09:36.150
00:09:36.150 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:36.150 ==============================================================================
00:09:36.150        Range in us     Cumulative IO count
00:09:36.150   8102.633 -  8162.211:   0.0159% (     2)
00:09:36.151 [ ... per-bucket cumulative counts ... ]
00:09:36.151  31933.905 - 32172.218: 100.0000% (     3)
00:09:36.151
00:09:36.151 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:36.151 ==============================================================================
00:09:36.151        Range in us     Cumulative IO count
00:09:36.151   8281.367 -  8340.945:   0.0638% (     8)
00:09:36.151 [ ... per-bucket cumulative counts ... ]
00:09:36.152  30146.560 - 30265.716: 100.0000% (     1)
00:09:36.152
00:09:36.152 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:36.152 ==============================================================================
00:09:36.152        Range in us     Cumulative IO count
00:09:36.152   8162.211 -  8221.789:   0.0159% (     2)
00:09:36.152 [ ... per-bucket cumulative counts ... ]
00:09:36.153  28954.996 - 29074.153: 100.0000% (     3)
00:09:36.153
00:09:36.153 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:36.153 ==============================================================================
00:09:36.153        Range in us     Cumulative IO count
00:09:36.153   8221.789 -  8281.367:   0.0159% (     2)
00:09:36.153 [ ... per-bucket cumulative counts ... ]
00:09:36.154  27167.651 - 27286.807: 100.0000% (     4)
00:09:36.154
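Reading the histograms: each row is a latency bucket and the percentage column is cumulative, so the percentile summaries above are a direct read-off of these tables — the latency reported for the 99.00000% row, for example, is the upper bound of the first bucket whose cumulative share of IOs reaches 99%. A minimal sketch of that reduction, assuming plain arrays for the bucket bounds and counts (names are illustrative, not the perf tool's actual internals):

    #include <stdint.h>
    #include <stdio.h>

    /* hi[i] is the upper bound (in us) of bucket i, count[i] the number of
     * IOs whose latency landed in that bucket, total the overall IO count.
     * Walk the buckets once and report where each target percentile is
     * first reached. */
    static void
    print_percentiles(const double *hi, const uint64_t *count,
                      size_t nbuckets, uint64_t total)
    {
        static const double targets[] = {
            1.0, 10.0, 25.0, 50.0, 75.0, 90.0, 95.0, 98.0, 99.0, 99.5, 99.9
        };
        size_t t = 0;
        uint64_t cum = 0;

        for (size_t i = 0; i < nbuckets; i++) {
            cum += count[i];
            double pct = 100.0 * (double)cum / (double)total;
            while (t < sizeof(targets) / sizeof(targets[0]) && pct >= targets[t]) {
                printf("%10.5f%% : %12.3fus\n", targets[t], hi[i]);
                t++;
            }
        }
    }

This is also why the extreme percentiles (99.99900% and up) repeat the same value in the summaries above: once the last IO has been counted, every remaining target resolves to the final bucket's bound.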
00:09:36.154 ************************************
00:09:36.154 END TEST nvme_perf
00:09:36.154 ************************************
00:09:36.154 15:17:49 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:36.154
00:09:36.154 real 0m2.669s
00:09:36.154 user 0m2.280s
00:09:36.154 sys 0m0.287s
00:09:36.154 15:17:49 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:36.154 15:17:49 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:36.154 15:17:49 nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:36.154 15:17:49 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:36.154 15:17:49 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:36.154 15:17:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:36.154 15:17:49 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:36.154 ************************************
00:09:36.154 START TEST nvme_hello_world
00:09:36.154 ************************************
00:09:36.154 15:17:49 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:36.412 Initializing NVMe Controllers
00:09:36.412 Attached to 0000:00:11.0
00:09:36.412 Namespace ID: 1 size: 5GB
00:09:36.412 Attached to 0000:00:13.0
00:09:36.412 Namespace ID: 1 size: 1GB
00:09:36.412 Attached to 0000:00:10.0
00:09:36.412 Namespace ID: 1 size: 6GB
00:09:36.412 Attached to 0000:00:12.0
00:09:36.412 Namespace ID: 1 size: 4GB
00:09:36.412 Namespace ID: 2 size: 4GB
00:09:36.412 Namespace ID: 3 size: 4GB
00:09:36.412 Initialization complete.
00:09:36.412 INFO: using host memory buffer for IO
00:09:36.412 Hello world!
00:09:36.412 INFO: using host memory buffer for IO
00:09:36.412 Hello world!
00:09:36.412 INFO: using host memory buffer for IO
00:09:36.412 Hello world!
00:09:36.412 INFO: using host memory buffer for IO
00:09:36.412 Hello world!
00:09:36.412 INFO: using host memory buffer for IO
00:09:36.412 Hello world!
00:09:36.412 INFO: using host memory buffer for IO
00:09:36.412 Hello world!
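For context, hello_world is SPDK's smallest end-to-end IO demo: it attaches to each probed controller, reports its namespaces, then writes a string to LBA 0 and reads it back over a polled queue pair (the "using host memory buffer for IO" lines mean no controller memory buffer was available, so an ordinary DMA-able host buffer is used). A condensed sketch of the round trip, with the probe/attach boilerplate elided and `ns`/`qpair` assumed to be set up as in the real example (error checks omitted):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    struct io_ctx { bool done; };             /* our own completion flag */

    static void
    io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        ((struct io_ctx *)arg)->done = true;
    }

    static void
    hello_roundtrip(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
    {
        uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
        struct io_ctx ctx = { .done = false };
        /* one sector of zeroed, DMA-able host memory */
        char *buf = spdk_zmalloc(sz, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                                 SPDK_MALLOC_DMA);

        snprintf(buf, sz, "%s", "Hello world!\n");
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0, 1, io_complete, &ctx, 0);
        while (!ctx.done) {
            spdk_nvme_qpair_process_completions(qpair, 0);  /* poll */
        }

        ctx.done = false;
        memset(buf, 0, sz);
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, &ctx, 0);
        while (!ctx.done) {
            spdk_nvme_qpair_process_completions(qpair, 0);
        }
        printf("%s", buf);                    /* "Hello world!" */
        spdk_free(buf);
    }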
00:09:36.412 ************************************
00:09:36.412 END TEST nvme_hello_world
00:09:36.412 ************************************
00:09:36.412
00:09:36.412 real 0m0.319s
00:09:36.412 user 0m0.136s
00:09:36.412 sys 0m0.130s
00:09:36.412 15:17:49 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:36.412 15:17:49 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:36.670 15:17:50 nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:36.670 15:17:50 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:36.670 15:17:50 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:36.670 15:17:50 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:36.670 15:17:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:36.670 ************************************
00:09:36.670 START TEST nvme_sgl
00:09:36.670 ************************************
00:09:36.670 15:17:50 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:36.670 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:36.670 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:36.670 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:36.929 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:36.929 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:36.929 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:36.929 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:36.929 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:36.929 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:36.929 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:36.929 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:36.929 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:36.929 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:36.929 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:36.929 NVMe Readv/Writev Request test
00:09:36.929 Attached to 0000:00:11.0
00:09:36.929 Attached to 0000:00:13.0
00:09:36.929 Attached to 0000:00:10.0
00:09:36.929 Attached to 0000:00:12.0
00:09:36.929 0000:00:11.0: build_io_request_2 test passed
00:09:36.929 0000:00:11.0: build_io_request_4 test passed
00:09:36.929 0000:00:11.0: build_io_request_5 test passed
00:09:36.929 0000:00:11.0: build_io_request_6 test passed
00:09:36.929 0000:00:11.0: build_io_request_7 test passed
00:09:36.929 0000:00:11.0: build_io_request_10 test passed
00:09:36.929 0000:00:10.0: build_io_request_2 test passed
00:09:36.929 0000:00:10.0: build_io_request_4 test passed
00:09:36.929 0000:00:10.0: build_io_request_5 test passed
00:09:36.929 0000:00:10.0: build_io_request_6 test passed
00:09:36.929 0000:00:10.0: build_io_request_7 test passed
00:09:36.929 0000:00:10.0: build_io_request_10 test passed
00:09:36.929 Cleaning up...
00:09:36.929 ************************************
00:09:36.929 END TEST nvme_sgl
00:09:36.929 ************************************
00:09:36.929
00:09:36.929 real 0m0.340s
00:09:36.929 user 0m0.182s
00:09:36.929 sys 0m0.112s
00:09:36.929 15:17:50 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:36.929 15:17:50 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:36.929 15:17:50 nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:36.929 15:17:50 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:36.929 15:17:50 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:36.929 15:17:50 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:36.929 15:17:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:36.929 ************************************
00:09:36.929 START TEST nvme_e2edp
00:09:36.929 ************************************
00:09:36.929 15:17:50 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:37.188 NVMe Write/Read with End-to-End data protection test
00:09:37.188 Attached to 0000:00:11.0
00:09:37.188 Attached to 0000:00:13.0
00:09:37.188 Attached to 0000:00:10.0
00:09:37.188 Attached to 0000:00:12.0
00:09:37.188 Cleaning up...
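Two notes on the pair of tests above. The nvme_sgl pass drives the vectored IO path: instead of a flat buffer, each request is described through reset_sgl/next_sge callbacks, and the "Invalid IO length parameter" lines are the intended negative cases — requests whose scatter-gather element lengths do not add up to a whole number of sectors are rejected when submitted, which is exactly what the test expects. A sketch of how such a vectored write is wired up (the two-element context struct and names are ours, not the test's):

    #include <stdint.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct { void *base; uint32_t len; } sge[2];  /* scatter list */
        int      cur;
        uint32_t offset;
    };

    /* Called once per request to rewind the scatter list. */
    static void
    reset_sgl(void *cb_arg, uint32_t sgl_offset)
    {
        struct sgl_ctx *c = cb_arg;
        c->cur = 0;
        c->offset = sgl_offset;
    }

    /* Called repeatedly to hand the driver the next element. */
    static int
    next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = cb_arg;
        *address = (char *)c->sge[c->cur].base + c->offset;
        *length  = c->sge[c->cur].len - c->offset;
        c->offset = 0;
        c->cur++;
        return 0;
    }

    /* If the SGE lengths do not total lba_count sectors, submission fails
     * up front -- the "Invalid IO length parameter" lines above. Note the
     * context doubles as the completion callback argument. */
    static int
    submit_vectored_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                          struct sgl_ctx *c, uint64_t lba, uint32_t lba_count,
                          spdk_nvme_cmd_cb cb_fn)
    {
        return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                       cb_fn, c, 0, reset_sgl, next_sge);
    }

The nvme_e2edp run exercises end-to-end data protection; its short output (attach, then "Cleaning up...") suggests these emulated namespaces are not formatted with protection information, so there was little to verify. On a PI-formatted namespace the write would go through the metadata-aware call, sketched here under the assumption that PRACT is set so the controller generates and checks the protection fields itself:

    /* One-block write with controller-generated protection information. */
    static int
    write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                  void *buf, uint64_t lba, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        return spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf,
                NULL /* PI carried in extended LBA */, lba, 1, cb_fn, cb_arg,
                SPDK_NVME_IO_FLAGS_PRACT, 0 /* apptag mask */, 0 /* apptag */);
    }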
00:09:37.188 ************************************
00:09:37.188 END TEST nvme_e2edp
00:09:37.188 ************************************
00:09:37.188
00:09:37.188 real 0m0.291s
00:09:37.188 user 0m0.108s
00:09:37.188 sys 0m0.137s
00:09:37.188 15:17:50 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:37.188 15:17:50 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:37.188 15:17:50 nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:37.188 15:17:50 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:37.188 15:17:50 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:37.188 15:17:50 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:37.188 15:17:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:37.188 ************************************
00:09:37.188 START TEST nvme_reserve
00:09:37.188 ************************************
00:09:37.188 15:17:50 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:37.755 =====================================================
00:09:37.755 NVMe Controller at PCI bus 0, device 17, function 0
00:09:37.755 =====================================================
00:09:37.755 Reservations: Not Supported
00:09:37.755 =====================================================
00:09:37.755 NVMe Controller at PCI bus 0, device 19, function 0
00:09:37.755 =====================================================
00:09:37.755 Reservations: Not Supported
00:09:37.755 =====================================================
00:09:37.755 NVMe Controller at PCI bus 0, device 16, function 0
00:09:37.755 =====================================================
00:09:37.755 Reservations: Not Supported
00:09:37.755 =====================================================
00:09:37.755 NVMe Controller at PCI bus 0, device 18, function 0
00:09:37.755 =====================================================
00:09:37.755 Reservations: Not Supported
00:09:37.755 Reservation test passed
00:09:37.755 ************************************
00:09:37.755 END TEST nvme_reserve
00:09:37.755 ************************************
00:09:37.755
00:09:37.755 real 0m0.310s
00:09:37.755 user 0m0.116s
00:09:37.755 sys 0m0.136s
00:09:37.755 15:17:51 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:37.755 15:17:51 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
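All four emulated controllers report "Reservations: Not Supported", so the reserve test passes without exercising the reservation commands themselves (support is advertised through the controller's ONCS capability bits). Where a controller does support them, the first step is registering a reservation key; a sketch of that call, with our own wrapper name and error handling omitted:

    #include <stdint.h>
    #include "spdk/nvme.h"

    /* Register new_key on the namespace, ignoring any existing key and
     * leaving persist-through-power-loss state unchanged. */
    static int
    register_reservation_key(struct spdk_nvme_ns *ns,
                             struct spdk_nvme_qpair *qpair, uint64_t new_key,
                             spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        struct spdk_nvme_reservation_register_data payload = {
            .crkey = 0,         /* no current key */
            .nrkey = new_key,   /* key to register */
        };

        return spdk_nvme_ns_cmd_reservation_register(ns, qpair, &payload,
                true /* ignore existing key */,
                SPDK_NVME_RESERVE_REGISTER_KEY,
                SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
                cb_fn, cb_arg);
    }

The nvme_err_injection test that launches next relies on SPDK's software fault hooks: an opcode can be marked to fail a set number of times with a chosen status code, which is what produces the paired "failed as expected" / "successfully as expected" lines in its output. A sketch, under the assumption (consistent with the test's admin-command use) that a NULL qpair targets the admin queue:

    /* Make the next Get Features admin command complete with
     * Invalid Field, then clear the injection so retries succeed. */
    static int
    inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
    {
        return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES,
                false /* still submit to the device */,
                0     /* no injected timeout */,
                1     /* fail exactly one command */,
                SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_INVALID_FIELD);
    }

    static void
    clear_injection(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES);
    }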
00:09:37.755 15:17:51 nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:37.755 15:17:51 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:37.755 15:17:51 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:37.755 15:17:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:37.755 15:17:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:37.755 ************************************
00:09:37.755 START TEST nvme_err_injection
00:09:37.755 ************************************
00:09:37.755 15:17:51 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:38.015 NVMe Error Injection test
00:09:38.015 Attached to 0000:00:11.0
00:09:38.015 Attached to 0000:00:13.0
00:09:38.015 Attached to 0000:00:10.0
00:09:38.015 Attached to 0000:00:12.0
00:09:38.015 0000:00:11.0: get features failed as expected
00:09:38.015 0000:00:13.0: get features failed as expected
00:09:38.015 0000:00:10.0: get features failed as expected
00:09:38.015 0000:00:12.0: get features failed as expected
00:09:38.015 0000:00:10.0: get features successfully as expected
00:09:38.015 0000:00:12.0: get features successfully as expected
00:09:38.015 0000:00:11.0: get features successfully as expected
00:09:38.015 0000:00:13.0: get features successfully as expected
00:09:38.015 0000:00:11.0: read failed as expected
00:09:38.015 0000:00:13.0: read failed as expected
00:09:38.015 0000:00:10.0: read failed as expected
00:09:38.015 0000:00:12.0: read failed as expected
00:09:38.015 0000:00:11.0: read successfully as expected
00:09:38.015 0000:00:13.0: read successfully as expected
00:09:38.015 0000:00:10.0: read successfully as expected
00:09:38.015 0000:00:12.0: read successfully as expected
00:09:38.015 Cleaning up...
00:09:38.015 ************************************
00:09:38.015 END TEST nvme_err_injection
00:09:38.015 ************************************
00:09:38.015
00:09:38.015 real 0m0.312s
00:09:38.015 user 0m0.134s
00:09:38.015 sys 0m0.134s
00:09:38.015 15:17:51 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:38.015 15:17:51 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:09:38.015 15:17:51 nvme -- common/autotest_common.sh@1142 -- # return 0
00:09:38.015 15:17:51 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:38.015 15:17:51 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:09:38.015 15:17:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:38.015 15:17:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:38.015 ************************************
00:09:38.015 START TEST nvme_overhead
00:09:38.015 ************************************
00:09:38.015 15:17:51 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:09:39.394 Initializing NVMe Controllers
00:09:39.394 Attached to 0000:00:11.0
00:09:39.394 Attached to 0000:00:13.0
00:09:39.394 Attached to 0000:00:10.0
00:09:39.394 Attached to 0000:00:12.0
00:09:39.394 Initialization complete. Launching workers.
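nvme_overhead, just launched with 4 KiB IOs for one second (-o 4096 -t 1) and histogram output enabled (-H), measures host-side CPU cost rather than device latency: the nanoseconds spent inside the submission call and inside completion processing for each IO, which is why the submit/complete figures it reports below are orders of magnitude smaller than the device latencies from nvme_perf above. A sketch of the submit-side measurement using SPDK's TSC helpers (structure and names are illustrative, not the tool's internals):

    #include <stdint.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Time one submission and convert TSC ticks to nanoseconds. */
    static uint64_t
    timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        uint64_t tsc_hz = spdk_get_ticks_hz();
        uint64_t start  = spdk_get_ticks();

        spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, cb_fn, cb_arg, 0);

        return (spdk_get_ticks() - start) * 1000000000ULL / tsc_hz;
    }

Accumulating these per-IO samples into avg/min/max and a bucketed histogram yields the report that follows.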
00:09:39.394 submit (in ns) avg, min, max = 17272.6, 13219.1, 78225.9 00:09:39.394 complete (in ns) avg, min, max = 12162.2, 8726.4, 194998.6 00:09:39.394 00:09:39.394 Submit histogram 00:09:39.394 ================ 00:09:39.394 Range in us Cumulative Count 00:09:39.394 13.207 - 13.265: 0.0115% ( 1) 00:09:39.394 13.265 - 13.324: 0.0230% ( 1) 00:09:39.394 13.382 - 13.440: 0.0461% ( 2) 00:09:39.394 13.556 - 13.615: 0.1728% ( 11) 00:09:39.394 13.615 - 13.673: 0.2995% ( 11) 00:09:39.394 13.673 - 13.731: 0.5759% ( 24) 00:09:39.394 13.731 - 13.789: 0.7717% ( 17) 00:09:39.394 13.789 - 13.847: 1.0942% ( 28) 00:09:39.394 13.847 - 13.905: 1.2324% ( 12) 00:09:39.394 13.905 - 13.964: 1.5089% ( 24) 00:09:39.394 13.964 - 14.022: 2.2921% ( 68) 00:09:39.394 14.022 - 14.080: 3.6628% ( 119) 00:09:39.394 14.080 - 14.138: 5.5863% ( 167) 00:09:39.394 14.138 - 14.196: 8.0857% ( 217) 00:09:39.394 14.196 - 14.255: 10.5851% ( 217) 00:09:39.394 14.255 - 14.313: 12.6814% ( 182) 00:09:39.394 14.313 - 14.371: 14.8929% ( 192) 00:09:39.394 14.371 - 14.429: 18.0719% ( 276) 00:09:39.394 14.429 - 14.487: 22.2184% ( 360) 00:09:39.394 14.487 - 14.545: 26.6528% ( 385) 00:09:39.394 14.545 - 14.604: 30.6842% ( 350) 00:09:39.394 14.604 - 14.662: 34.1050% ( 297) 00:09:39.394 14.662 - 14.720: 36.7196% ( 227) 00:09:39.394 14.720 - 14.778: 38.6201% ( 165) 00:09:39.394 14.778 - 14.836: 40.2903% ( 145) 00:09:39.394 14.836 - 14.895: 41.8798% ( 138) 00:09:39.394 14.895 - 15.011: 44.7938% ( 253) 00:09:39.394 15.011 - 15.127: 46.4755% ( 146) 00:09:39.394 15.127 - 15.244: 48.4566% ( 172) 00:09:39.394 15.244 - 15.360: 51.3246% ( 249) 00:09:39.394 15.360 - 15.476: 53.7088% ( 207) 00:09:39.394 15.476 - 15.593: 55.7590% ( 178) 00:09:39.394 15.593 - 15.709: 56.9800% ( 106) 00:09:39.394 15.709 - 15.825: 58.0166% ( 90) 00:09:39.394 15.825 - 15.942: 58.6616% ( 56) 00:09:39.394 15.942 - 16.058: 59.4103% ( 65) 00:09:39.394 16.058 - 16.175: 60.0898% ( 59) 00:09:39.394 16.175 - 16.291: 60.8155% ( 63) 00:09:39.394 16.291 - 16.407: 61.2992% ( 42) 00:09:39.394 16.407 - 16.524: 61.5872% ( 25) 00:09:39.394 16.524 - 16.640: 61.8060% ( 19) 00:09:39.394 16.640 - 16.756: 62.0018% ( 17) 00:09:39.394 16.756 - 16.873: 62.0940% ( 8) 00:09:39.394 16.873 - 16.989: 62.1401% ( 4) 00:09:39.394 16.989 - 17.105: 62.2783% ( 12) 00:09:39.394 17.105 - 17.222: 62.3243% ( 4) 00:09:39.394 17.222 - 17.338: 62.3704% ( 4) 00:09:39.394 17.338 - 17.455: 62.4626% ( 8) 00:09:39.394 17.455 - 17.571: 62.5202% ( 5) 00:09:39.394 17.571 - 17.687: 62.5893% ( 6) 00:09:39.394 17.687 - 17.804: 62.8772% ( 25) 00:09:39.394 17.804 - 17.920: 64.9044% ( 176) 00:09:39.394 17.920 - 18.036: 70.5943% ( 494) 00:09:39.394 18.036 - 18.153: 76.2727% ( 493) 00:09:39.394 18.153 - 18.269: 79.4517% ( 276) 00:09:39.395 18.269 - 18.385: 80.7994% ( 117) 00:09:39.395 18.385 - 18.502: 81.8129% ( 88) 00:09:39.395 18.502 - 18.618: 82.3543% ( 47) 00:09:39.395 18.618 - 18.735: 82.7689% ( 36) 00:09:39.395 18.735 - 18.851: 83.3794% ( 53) 00:09:39.395 18.851 - 18.967: 84.1281% ( 65) 00:09:39.395 18.967 - 19.084: 84.9689% ( 73) 00:09:39.395 19.084 - 19.200: 85.7061% ( 64) 00:09:39.395 19.200 - 19.316: 86.3741% ( 58) 00:09:39.395 19.316 - 19.433: 86.8809% ( 44) 00:09:39.395 19.433 - 19.549: 87.1343% ( 22) 00:09:39.395 19.549 - 19.665: 87.3647% ( 20) 00:09:39.395 19.665 - 19.782: 87.4683% ( 9) 00:09:39.395 19.782 - 19.898: 87.6181% ( 13) 00:09:39.395 19.898 - 20.015: 87.7793% ( 14) 00:09:39.395 20.015 - 20.131: 87.9290% ( 13) 00:09:39.395 20.131 - 20.247: 88.2055% ( 24) 00:09:39.395 20.247 - 20.364: 88.2861% ( 7) 
00:09:39.395 20.364 - 20.480: 88.4128% ( 11) 00:09:39.395 20.480 - 20.596: 88.5510% ( 12) 00:09:39.395 20.596 - 20.713: 88.6547% ( 9) 00:09:39.395 20.713 - 20.829: 88.7353% ( 7) 00:09:39.395 20.829 - 20.945: 88.7929% ( 5) 00:09:39.395 20.945 - 21.062: 88.8850% ( 8) 00:09:39.395 21.062 - 21.178: 89.0233% ( 12) 00:09:39.395 21.178 - 21.295: 89.1269% ( 9) 00:09:39.395 21.295 - 21.411: 89.2076% ( 7) 00:09:39.395 21.411 - 21.527: 89.2767% ( 6) 00:09:39.395 21.527 - 21.644: 89.4149% ( 12) 00:09:39.395 21.760 - 21.876: 89.4955% ( 7) 00:09:39.395 21.876 - 21.993: 89.5531% ( 5) 00:09:39.395 21.993 - 22.109: 89.5992% ( 4) 00:09:39.395 22.109 - 22.225: 89.6568% ( 5) 00:09:39.395 22.225 - 22.342: 89.6798% ( 2) 00:09:39.395 22.342 - 22.458: 89.7604% ( 7) 00:09:39.395 22.458 - 22.575: 89.8180% ( 5) 00:09:39.395 22.575 - 22.691: 89.9217% ( 9) 00:09:39.395 22.691 - 22.807: 89.9677% ( 4) 00:09:39.395 22.807 - 22.924: 90.0484% ( 7) 00:09:39.395 22.924 - 23.040: 90.1175% ( 6) 00:09:39.395 23.040 - 23.156: 90.1405% ( 2) 00:09:39.395 23.156 - 23.273: 90.1981% ( 5) 00:09:39.395 23.273 - 23.389: 90.2787% ( 7) 00:09:39.395 23.389 - 23.505: 90.3363% ( 5) 00:09:39.395 23.505 - 23.622: 90.3594% ( 2) 00:09:39.395 23.622 - 23.738: 90.4400% ( 7) 00:09:39.395 23.738 - 23.855: 90.5206% ( 7) 00:09:39.395 23.855 - 23.971: 90.5897% ( 6) 00:09:39.395 23.971 - 24.087: 90.6588% ( 6) 00:09:39.395 24.087 - 24.204: 90.6934% ( 3) 00:09:39.395 24.204 - 24.320: 90.7279% ( 3) 00:09:39.395 24.320 - 24.436: 90.8316% ( 9) 00:09:39.395 24.436 - 24.553: 90.9583% ( 11) 00:09:39.395 24.553 - 24.669: 91.1080% ( 13) 00:09:39.395 24.669 - 24.785: 91.1541% ( 4) 00:09:39.395 24.785 - 24.902: 91.2117% ( 5) 00:09:39.395 24.902 - 25.018: 91.3269% ( 10) 00:09:39.395 25.018 - 25.135: 91.4421% ( 10) 00:09:39.395 25.135 - 25.251: 91.5342% ( 8) 00:09:39.395 25.251 - 25.367: 91.6033% ( 6) 00:09:39.395 25.367 - 25.484: 91.6379% ( 3) 00:09:39.395 25.484 - 25.600: 91.6839% ( 4) 00:09:39.395 25.600 - 25.716: 91.7185% ( 3) 00:09:39.395 25.716 - 25.833: 91.8106% ( 8) 00:09:39.395 25.833 - 25.949: 91.8567% ( 4) 00:09:39.395 25.949 - 26.065: 91.9719% ( 10) 00:09:39.395 26.065 - 26.182: 92.0295% ( 5) 00:09:39.395 26.182 - 26.298: 92.0525% ( 2) 00:09:39.395 26.298 - 26.415: 92.1331% ( 7) 00:09:39.395 26.415 - 26.531: 92.2023% ( 6) 00:09:39.395 26.531 - 26.647: 92.2829% ( 7) 00:09:39.395 26.647 - 26.764: 92.3059% ( 2) 00:09:39.395 26.764 - 26.880: 92.3750% ( 6) 00:09:39.395 26.880 - 26.996: 92.4211% ( 4) 00:09:39.395 26.996 - 27.113: 92.4441% ( 2) 00:09:39.395 27.113 - 27.229: 92.5132% ( 6) 00:09:39.395 27.229 - 27.345: 92.5363% ( 2) 00:09:39.395 27.345 - 27.462: 92.6054% ( 6) 00:09:39.395 27.462 - 27.578: 92.7206% ( 10) 00:09:39.395 27.578 - 27.695: 92.7782% ( 5) 00:09:39.395 27.695 - 27.811: 92.8012% ( 2) 00:09:39.395 27.811 - 27.927: 92.8703% ( 6) 00:09:39.395 27.927 - 28.044: 92.9279% ( 5) 00:09:39.395 28.044 - 28.160: 92.9394% ( 1) 00:09:39.395 28.160 - 28.276: 93.0316% ( 8) 00:09:39.395 28.276 - 28.393: 93.1007% ( 6) 00:09:39.395 28.393 - 28.509: 93.2389% ( 12) 00:09:39.395 28.509 - 28.625: 93.4232% ( 16) 00:09:39.395 28.625 - 28.742: 93.6996% ( 24) 00:09:39.395 28.742 - 28.858: 94.1603% ( 40) 00:09:39.395 28.858 - 28.975: 94.6326% ( 41) 00:09:39.395 28.975 - 29.091: 95.2200% ( 51) 00:09:39.395 29.091 - 29.207: 95.7613% ( 47) 00:09:39.395 29.207 - 29.324: 96.2106% ( 39) 00:09:39.395 29.324 - 29.440: 96.5676% ( 31) 00:09:39.395 29.440 - 29.556: 96.8095% ( 21) 00:09:39.395 29.556 - 29.673: 97.0859% ( 24) 00:09:39.395 29.673 - 29.789: 97.3048% ( 19) 00:09:39.395 
29.789 - 30.022: 97.6849% ( 33) 00:09:39.395 30.022 - 30.255: 98.0304% ( 30) 00:09:39.395 30.255 - 30.487: 98.2723% ( 21) 00:09:39.395 30.487 - 30.720: 98.4681% ( 17) 00:09:39.395 30.720 - 30.953: 98.6293% ( 14) 00:09:39.395 30.953 - 31.185: 98.7330% ( 9) 00:09:39.395 31.185 - 31.418: 98.7906% ( 5) 00:09:39.395 31.418 - 31.651: 98.8367% ( 4) 00:09:39.395 31.651 - 31.884: 98.8827% ( 4) 00:09:39.395 31.884 - 32.116: 98.9173% ( 3) 00:09:39.395 32.116 - 32.349: 98.9749% ( 5) 00:09:39.395 32.349 - 32.582: 98.9979% ( 2) 00:09:39.395 32.582 - 32.815: 99.0325% ( 3) 00:09:39.395 33.047 - 33.280: 99.0670% ( 3) 00:09:39.395 33.280 - 33.513: 99.1131% ( 4) 00:09:39.395 33.513 - 33.745: 99.1246% ( 1) 00:09:39.395 33.745 - 33.978: 99.1477% ( 2) 00:09:39.395 33.978 - 34.211: 99.1592% ( 1) 00:09:39.395 34.211 - 34.444: 99.1937% ( 3) 00:09:39.395 34.444 - 34.676: 99.2053% ( 1) 00:09:39.395 34.909 - 35.142: 99.2744% ( 6) 00:09:39.395 35.142 - 35.375: 99.3435% ( 6) 00:09:39.395 35.375 - 35.607: 99.3550% ( 1) 00:09:39.395 35.607 - 35.840: 99.3665% ( 1) 00:09:39.395 35.840 - 36.073: 99.3780% ( 1) 00:09:39.395 36.073 - 36.305: 99.4241% ( 4) 00:09:39.395 36.305 - 36.538: 99.4702% ( 4) 00:09:39.395 36.538 - 36.771: 99.4817% ( 1) 00:09:39.395 36.771 - 37.004: 99.5278% ( 4) 00:09:39.395 37.004 - 37.236: 99.5508% ( 2) 00:09:39.395 37.236 - 37.469: 99.5853% ( 3) 00:09:39.395 37.469 - 37.702: 99.6199% ( 3) 00:09:39.395 38.167 - 38.400: 99.6545% ( 3) 00:09:39.395 38.400 - 38.633: 99.6660% ( 1) 00:09:39.395 39.098 - 39.331: 99.6890% ( 2) 00:09:39.395 39.331 - 39.564: 99.7005% ( 1) 00:09:39.395 39.564 - 39.796: 99.7120% ( 1) 00:09:39.395 39.796 - 40.029: 99.7581% ( 4) 00:09:39.395 40.495 - 40.727: 99.7696% ( 1) 00:09:39.395 43.055 - 43.287: 99.7812% ( 1) 00:09:39.395 44.451 - 44.684: 99.7927% ( 1) 00:09:39.395 44.916 - 45.149: 99.8042% ( 1) 00:09:39.395 45.149 - 45.382: 99.8272% ( 2) 00:09:39.395 45.382 - 45.615: 99.8387% ( 1) 00:09:39.395 45.615 - 45.847: 99.8503% ( 1) 00:09:39.395 45.847 - 46.080: 99.8618% ( 1) 00:09:39.395 49.571 - 49.804: 99.8848% ( 2) 00:09:39.395 49.804 - 50.036: 99.8963% ( 1) 00:09:39.395 50.036 - 50.269: 99.9079% ( 1) 00:09:39.395 51.898 - 52.131: 99.9194% ( 1) 00:09:39.395 53.295 - 53.527: 99.9309% ( 1) 00:09:39.395 53.527 - 53.760: 99.9424% ( 1) 00:09:39.395 54.225 - 54.458: 99.9539% ( 1) 00:09:39.395 54.691 - 54.924: 99.9654% ( 1) 00:09:39.395 55.622 - 55.855: 99.9770% ( 1) 00:09:39.395 56.553 - 56.785: 99.9885% ( 1) 00:09:39.395 78.196 - 78.662: 100.0000% ( 1) 00:09:39.395 00:09:39.395 Complete histogram 00:09:39.395 ================== 00:09:39.395 Range in us Cumulative Count 00:09:39.395 8.669 - 8.727: 0.0115% ( 1) 00:09:39.395 8.727 - 8.785: 0.0691% ( 5) 00:09:39.395 8.785 - 8.844: 0.1728% ( 9) 00:09:39.395 8.844 - 8.902: 0.3801% ( 18) 00:09:39.395 8.902 - 8.960: 0.5644% ( 16) 00:09:39.395 8.960 - 9.018: 0.7832% ( 19) 00:09:39.395 9.018 - 9.076: 1.4052% ( 54) 00:09:39.395 9.076 - 9.135: 2.4073% ( 87) 00:09:39.395 9.135 - 9.193: 3.9046% ( 130) 00:09:39.395 9.193 - 9.251: 5.6093% ( 148) 00:09:39.395 9.251 - 9.309: 7.5674% ( 170) 00:09:39.395 9.309 - 9.367: 10.8731% ( 287) 00:09:39.395 9.367 - 9.425: 15.3075% ( 385) 00:09:39.395 9.425 - 9.484: 20.8247% ( 479) 00:09:39.395 9.484 - 9.542: 26.0078% ( 450) 00:09:39.395 9.542 - 9.600: 30.6611% ( 404) 00:09:39.395 9.600 - 9.658: 34.1626% ( 304) 00:09:39.395 9.658 - 9.716: 36.9846% ( 245) 00:09:39.395 9.716 - 9.775: 39.4494% ( 214) 00:09:39.395 9.775 - 9.833: 41.2347% ( 155) 00:09:39.395 9.833 - 9.891: 42.6169% ( 120) 00:09:39.395 9.891 - 9.949: 
44.0452% ( 124) 00:09:39.395 9.949 - 10.007: 45.6346% ( 138) 00:09:39.395 10.007 - 10.065: 47.3739% ( 151) 00:09:39.395 10.065 - 10.124: 48.9864% ( 140) 00:09:39.395 10.124 - 10.182: 50.3686% ( 120) 00:09:39.395 10.182 - 10.240: 51.6356% ( 110) 00:09:39.395 10.240 - 10.298: 52.5916% ( 83) 00:09:39.395 10.298 - 10.356: 53.3518% ( 66) 00:09:39.395 10.356 - 10.415: 54.1004% ( 65) 00:09:39.395 10.415 - 10.473: 54.5727% ( 41) 00:09:39.395 10.473 - 10.531: 54.9528% ( 33) 00:09:39.395 10.531 - 10.589: 55.2407% ( 25) 00:09:39.395 10.589 - 10.647: 55.4135% ( 15) 00:09:39.395 10.647 - 10.705: 55.6323% ( 19) 00:09:39.396 10.705 - 10.764: 55.7475% ( 10) 00:09:39.396 10.764 - 10.822: 55.9088% ( 14) 00:09:39.396 10.822 - 10.880: 56.0931% ( 16) 00:09:39.396 10.880 - 10.938: 56.3004% ( 18) 00:09:39.396 10.938 - 10.996: 56.6344% ( 29) 00:09:39.396 10.996 - 11.055: 56.9454% ( 27) 00:09:39.396 11.055 - 11.113: 57.3025% ( 31) 00:09:39.396 11.113 - 11.171: 57.5674% ( 23) 00:09:39.396 11.171 - 11.229: 57.9475% ( 33) 00:09:39.396 11.229 - 11.287: 58.2700% ( 28) 00:09:39.396 11.287 - 11.345: 58.4082% ( 12) 00:09:39.396 11.345 - 11.404: 58.5925% ( 16) 00:09:39.396 11.404 - 11.462: 58.6846% ( 8) 00:09:39.396 11.462 - 11.520: 58.9496% ( 23) 00:09:39.396 11.520 - 11.578: 59.1569% ( 18) 00:09:39.396 11.578 - 11.636: 59.3757% ( 19) 00:09:39.396 11.636 - 11.695: 59.5715% ( 17) 00:09:39.396 11.695 - 11.753: 59.7558% ( 16) 00:09:39.396 11.753 - 11.811: 59.9977% ( 21) 00:09:39.396 11.811 - 11.869: 60.2050% ( 18) 00:09:39.396 11.869 - 11.927: 60.6312% ( 37) 00:09:39.396 11.927 - 11.985: 61.8291% ( 104) 00:09:39.396 11.985 - 12.044: 64.4552% ( 228) 00:09:39.396 12.044 - 12.102: 68.9127% ( 387) 00:09:39.396 12.102 - 12.160: 73.4969% ( 398) 00:09:39.396 12.160 - 12.218: 77.0445% ( 308) 00:09:39.396 12.218 - 12.276: 79.3481% ( 200) 00:09:39.396 12.276 - 12.335: 80.9952% ( 143) 00:09:39.396 12.335 - 12.393: 81.6747% ( 59) 00:09:39.396 12.393 - 12.451: 82.1355% ( 40) 00:09:39.396 12.451 - 12.509: 82.4119% ( 24) 00:09:39.396 12.509 - 12.567: 82.5386% ( 11) 00:09:39.396 12.567 - 12.625: 82.6307% ( 8) 00:09:39.396 12.625 - 12.684: 82.7344% ( 9) 00:09:39.396 12.684 - 12.742: 82.8265% ( 8) 00:09:39.396 12.742 - 12.800: 82.9532% ( 11) 00:09:39.396 12.800 - 12.858: 83.0684% ( 10) 00:09:39.396 12.858 - 12.916: 83.2988% ( 20) 00:09:39.396 12.916 - 12.975: 83.5752% ( 24) 00:09:39.396 12.975 - 13.033: 84.0359% ( 40) 00:09:39.396 13.033 - 13.091: 84.3584% ( 28) 00:09:39.396 13.091 - 13.149: 84.7385% ( 33) 00:09:39.396 13.149 - 13.207: 85.0610% ( 28) 00:09:39.396 13.207 - 13.265: 85.3720% ( 27) 00:09:39.396 13.265 - 13.324: 85.6139% ( 21) 00:09:39.396 13.324 - 13.382: 85.8673% ( 22) 00:09:39.396 13.382 - 13.440: 86.0631% ( 17) 00:09:39.396 13.440 - 13.498: 86.2244% ( 14) 00:09:39.396 13.498 - 13.556: 86.3050% ( 7) 00:09:39.396 13.556 - 13.615: 86.3741% ( 6) 00:09:39.396 13.615 - 13.673: 86.3971% ( 2) 00:09:39.396 13.673 - 13.731: 86.4317% ( 3) 00:09:39.396 13.731 - 13.789: 86.4778% ( 4) 00:09:39.396 13.789 - 13.847: 86.5008% ( 2) 00:09:39.396 13.847 - 13.905: 86.5584% ( 5) 00:09:39.396 13.905 - 13.964: 86.6160% ( 5) 00:09:39.396 13.964 - 14.022: 86.6621% ( 4) 00:09:39.396 14.022 - 14.080: 86.7312% ( 6) 00:09:39.396 14.080 - 14.138: 86.7888% ( 5) 00:09:39.396 14.138 - 14.196: 86.8003% ( 1) 00:09:39.396 14.196 - 14.255: 86.8233% ( 2) 00:09:39.396 14.255 - 14.313: 86.8694% ( 4) 00:09:39.396 14.313 - 14.371: 86.9039% ( 3) 00:09:39.396 14.371 - 14.429: 86.9270% ( 2) 00:09:39.396 14.429 - 14.487: 86.9615% ( 3) 00:09:39.396 14.487 - 14.545: 86.9961% 
( 3) 00:09:39.396 14.545 - 14.604: 87.0191% ( 2) 00:09:39.396 14.604 - 14.662: 87.0997% ( 7) 00:09:39.396 14.662 - 14.720: 87.1113% ( 1) 00:09:39.396 14.720 - 14.778: 87.1573% ( 4) 00:09:39.396 14.778 - 14.836: 87.2264% ( 6) 00:09:39.396 14.836 - 14.895: 87.3301% ( 9) 00:09:39.396 14.895 - 15.011: 87.5144% ( 16) 00:09:39.396 15.011 - 15.127: 87.7793% ( 23) 00:09:39.396 15.127 - 15.244: 87.9636% ( 16) 00:09:39.396 15.244 - 15.360: 88.1479% ( 16) 00:09:39.396 15.360 - 15.476: 88.2746% ( 11) 00:09:39.396 15.476 - 15.593: 88.3552% ( 7) 00:09:39.396 15.593 - 15.709: 88.4128% ( 5) 00:09:39.396 15.709 - 15.825: 88.5050% ( 8) 00:09:39.396 15.825 - 15.942: 88.5625% ( 5) 00:09:39.396 15.942 - 16.058: 88.6777% ( 10) 00:09:39.396 16.058 - 16.175: 88.7584% ( 7) 00:09:39.396 16.175 - 16.291: 88.8620% ( 9) 00:09:39.396 16.291 - 16.407: 88.9311% ( 6) 00:09:39.396 16.407 - 16.524: 88.9542% ( 2) 00:09:39.396 16.524 - 16.640: 89.0578% ( 9) 00:09:39.396 16.640 - 16.756: 89.1039% ( 4) 00:09:39.396 16.756 - 16.873: 89.2076% ( 9) 00:09:39.396 16.873 - 16.989: 89.3458% ( 12) 00:09:39.396 16.989 - 17.105: 89.3918% ( 4) 00:09:39.396 17.105 - 17.222: 89.4610% ( 6) 00:09:39.396 17.222 - 17.338: 89.5531% ( 8) 00:09:39.396 17.338 - 17.455: 89.6452% ( 8) 00:09:39.396 17.455 - 17.571: 89.6913% ( 4) 00:09:39.396 17.571 - 17.687: 89.7374% ( 4) 00:09:39.396 17.687 - 17.804: 89.7604% ( 2) 00:09:39.396 17.804 - 17.920: 89.8295% ( 6) 00:09:39.396 17.920 - 18.036: 89.8871% ( 5) 00:09:39.396 18.036 - 18.153: 89.9332% ( 4) 00:09:39.396 18.153 - 18.269: 90.0369% ( 9) 00:09:39.396 18.269 - 18.385: 90.0944% ( 5) 00:09:39.396 18.385 - 18.502: 90.1290% ( 3) 00:09:39.396 18.502 - 18.618: 90.1751% ( 4) 00:09:39.396 18.618 - 18.735: 90.2442% ( 6) 00:09:39.396 18.735 - 18.851: 90.2903% ( 4) 00:09:39.396 18.851 - 18.967: 90.3248% ( 3) 00:09:39.396 18.967 - 19.084: 90.3709% ( 4) 00:09:39.396 19.084 - 19.200: 90.4054% ( 3) 00:09:39.396 19.200 - 19.316: 90.4400% ( 3) 00:09:39.396 19.316 - 19.433: 90.5091% ( 6) 00:09:39.396 19.433 - 19.549: 90.5782% ( 6) 00:09:39.396 19.549 - 19.665: 90.6012% ( 2) 00:09:39.396 19.665 - 19.782: 90.6358% ( 3) 00:09:39.396 19.782 - 19.898: 90.6819% ( 4) 00:09:39.396 19.898 - 20.015: 90.7625% ( 7) 00:09:39.396 20.015 - 20.131: 90.7971% ( 3) 00:09:39.396 20.131 - 20.247: 90.8316% ( 3) 00:09:39.396 20.247 - 20.364: 90.8546% ( 2) 00:09:39.396 20.364 - 20.480: 90.9007% ( 4) 00:09:39.396 20.480 - 20.596: 90.9468% ( 4) 00:09:39.396 20.596 - 20.713: 90.9813% ( 3) 00:09:39.396 20.713 - 20.829: 91.0044% ( 2) 00:09:39.396 20.829 - 20.945: 91.0620% ( 5) 00:09:39.396 20.945 - 21.062: 91.0965% ( 3) 00:09:39.396 21.062 - 21.178: 91.1311% ( 3) 00:09:39.396 21.178 - 21.295: 91.1887% ( 5) 00:09:39.396 21.295 - 21.411: 91.2808% ( 8) 00:09:39.396 21.411 - 21.527: 91.3269% ( 4) 00:09:39.396 21.527 - 21.644: 91.3730% ( 4) 00:09:39.396 21.644 - 21.760: 91.3960% ( 2) 00:09:39.396 21.760 - 21.876: 91.4766% ( 7) 00:09:39.396 21.876 - 21.993: 91.5112% ( 3) 00:09:39.396 21.993 - 22.109: 91.5227% ( 1) 00:09:39.396 22.109 - 22.225: 91.5688% ( 4) 00:09:39.396 22.225 - 22.342: 91.6264% ( 5) 00:09:39.396 22.342 - 22.458: 91.6494% ( 2) 00:09:39.396 22.458 - 22.575: 91.6955% ( 4) 00:09:39.396 22.575 - 22.691: 91.7646% ( 6) 00:09:39.396 22.691 - 22.807: 91.7991% ( 3) 00:09:39.396 22.807 - 22.924: 91.8913% ( 8) 00:09:39.396 22.924 - 23.040: 91.9143% ( 2) 00:09:39.396 23.040 - 23.156: 91.9258% ( 1) 00:09:39.396 23.156 - 23.273: 91.9489% ( 2) 00:09:39.396 23.273 - 23.389: 92.0871% ( 12) 00:09:39.396 23.389 - 23.505: 92.2829% ( 17) 00:09:39.396 23.505 
- 23.622: 92.5363% ( 22) 00:09:39.396 23.622 - 23.738: 92.8933% ( 31) 00:09:39.396 23.738 - 23.855: 93.5384% ( 56) 00:09:39.396 23.855 - 23.971: 94.2064% ( 58) 00:09:39.396 23.971 - 24.087: 94.9205% ( 62) 00:09:39.396 24.087 - 24.204: 95.5540% ( 55) 00:09:39.396 24.204 - 24.320: 96.2106% ( 57) 00:09:39.396 24.320 - 24.436: 96.6367% ( 37) 00:09:39.396 24.436 - 24.553: 97.0744% ( 38) 00:09:39.396 24.553 - 24.669: 97.2817% ( 18) 00:09:39.396 24.669 - 24.785: 97.6158% ( 29) 00:09:39.396 24.785 - 24.902: 97.7885% ( 15) 00:09:39.396 24.902 - 25.018: 97.9728% ( 16) 00:09:39.396 25.018 - 25.135: 98.1571% ( 16) 00:09:39.396 25.135 - 25.251: 98.2838% ( 11) 00:09:39.396 25.251 - 25.367: 98.3414% ( 5) 00:09:39.396 25.367 - 25.484: 98.4220% ( 7) 00:09:39.396 25.484 - 25.600: 98.5142% ( 8) 00:09:39.396 25.600 - 25.716: 98.6063% ( 8) 00:09:39.396 25.716 - 25.833: 98.6754% ( 6) 00:09:39.396 25.833 - 25.949: 98.7445% ( 6) 00:09:39.396 25.949 - 26.065: 98.8597% ( 10) 00:09:39.396 26.065 - 26.182: 98.8943% ( 3) 00:09:39.396 26.182 - 26.298: 98.9058% ( 1) 00:09:39.396 26.298 - 26.415: 98.9519% ( 4) 00:09:39.396 26.415 - 26.531: 98.9749% ( 2) 00:09:39.396 26.531 - 26.647: 99.0094% ( 3) 00:09:39.396 26.647 - 26.764: 99.0325% ( 2) 00:09:39.396 26.880 - 26.996: 99.0440% ( 1) 00:09:39.396 27.113 - 27.229: 99.0670% ( 2) 00:09:39.396 27.229 - 27.345: 99.1131% ( 4) 00:09:39.396 27.462 - 27.578: 99.1246% ( 1) 00:09:39.396 27.578 - 27.695: 99.1361% ( 1) 00:09:39.396 27.695 - 27.811: 99.1592% ( 2) 00:09:39.396 28.509 - 28.625: 99.1707% ( 1) 00:09:39.396 28.742 - 28.858: 99.1822% ( 1) 00:09:39.396 28.975 - 29.091: 99.1937% ( 1) 00:09:39.396 29.556 - 29.673: 99.2168% ( 2) 00:09:39.396 29.673 - 29.789: 99.2283% ( 1) 00:09:39.396 29.789 - 30.022: 99.2513% ( 2) 00:09:39.396 30.022 - 30.255: 99.3089% ( 5) 00:09:39.396 30.255 - 30.487: 99.3550% ( 4) 00:09:39.396 30.487 - 30.720: 99.3780% ( 2) 00:09:39.396 30.720 - 30.953: 99.4241% ( 4) 00:09:39.396 30.953 - 31.185: 99.4471% ( 2) 00:09:39.396 31.185 - 31.418: 99.4932% ( 4) 00:09:39.397 31.418 - 31.651: 99.5278% ( 3) 00:09:39.397 31.651 - 31.884: 99.5853% ( 5) 00:09:39.397 31.884 - 32.116: 99.5969% ( 1) 00:09:39.397 32.116 - 32.349: 99.6084% ( 1) 00:09:39.397 32.349 - 32.582: 99.6199% ( 1) 00:09:39.397 32.582 - 32.815: 99.6314% ( 1) 00:09:39.397 32.815 - 33.047: 99.6429% ( 1) 00:09:39.397 33.047 - 33.280: 99.6545% ( 1) 00:09:39.397 33.280 - 33.513: 99.6775% ( 2) 00:09:39.397 33.513 - 33.745: 99.7120% ( 3) 00:09:39.397 34.676 - 34.909: 99.7466% ( 3) 00:09:39.397 34.909 - 35.142: 99.7581% ( 1) 00:09:39.397 35.142 - 35.375: 99.7696% ( 1) 00:09:39.397 36.305 - 36.538: 99.7812% ( 1) 00:09:39.397 37.236 - 37.469: 99.7927% ( 1) 00:09:39.397 37.702 - 37.935: 99.8042% ( 1) 00:09:39.397 37.935 - 38.167: 99.8157% ( 1) 00:09:39.397 38.633 - 38.865: 99.8272% ( 1) 00:09:39.397 39.098 - 39.331: 99.8387% ( 1) 00:09:39.397 40.029 - 40.262: 99.8618% ( 2) 00:09:39.397 40.960 - 41.193: 99.8733% ( 1) 00:09:39.397 41.658 - 41.891: 99.8848% ( 1) 00:09:39.397 45.615 - 45.847: 99.8963% ( 1) 00:09:39.397 47.476 - 47.709: 99.9079% ( 1) 00:09:39.397 48.407 - 48.640: 99.9194% ( 1) 00:09:39.397 58.647 - 58.880: 99.9309% ( 1) 00:09:39.397 74.938 - 75.404: 99.9424% ( 1) 00:09:39.397 78.662 - 79.127: 99.9539% ( 1) 00:09:39.397 79.127 - 79.593: 99.9654% ( 1) 00:09:39.397 83.782 - 84.247: 99.9770% ( 1) 00:09:39.397 143.360 - 144.291: 99.9885% ( 1) 00:09:39.397 194.560 - 195.491: 100.0000% ( 1) 00:09:39.397 00:09:39.397 ************************************ 00:09:39.397 END TEST nvme_overhead 00:09:39.397 
************************************ 00:09:39.397 00:09:39.397 real 0m1.286s 00:09:39.397 user 0m1.106s 00:09:39.397 sys 0m0.132s 00:09:39.397 15:17:52 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.397 15:17:52 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:39.397 15:17:52 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:39.397 15:17:52 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:39.397 15:17:52 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:39.397 15:17:52 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.397 15:17:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.397 ************************************ 00:09:39.397 START TEST nvme_arbitration 00:09:39.397 ************************************ 00:09:39.397 15:17:52 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:42.682 Initializing NVMe Controllers 00:09:42.682 Attached to 0000:00:11.0 00:09:42.682 Attached to 0000:00:13.0 00:09:42.682 Attached to 0000:00:10.0 00:09:42.682 Attached to 0000:00:12.0 00:09:42.682 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:09:42.682 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:09:42.682 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:09:42.682 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:42.682 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:42.682 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:42.682 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:42.682 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:42.682 Initialization complete. Launching workers. 
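The worker-thread and per-core throughput lines that follow come from the arbitration example, whose full configuration is echoed above, so the identical workload can be rerun by hand. Flag annotations below are inferred from that config echo; the unannotated flags are copied verbatim without asserting their meaning:

    # -q 64: queue depth; -w randrw -M 50: 50/50 random read/write mix;
    # -t 3: three-second run; -c 0xf: lcores 0-3; -i 0: the shared-memory id
    # used by every multi-process tool in this log.
    /home/vagrant/spdk_repo/spdk/build/examples/arbitration \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0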
00:09:42.682 Starting thread on core 1 with urgent priority queue 00:09:42.682 Starting thread on core 2 with urgent priority queue 00:09:42.682 Starting thread on core 3 with urgent priority queue 00:09:42.682 Starting thread on core 0 with urgent priority queue 00:09:42.682 QEMU NVMe Ctrl (12341 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:09:42.682 QEMU NVMe Ctrl (12342 ) core 0: 661.33 IO/s 151.21 secs/100000 ios 00:09:42.682 QEMU NVMe Ctrl (12343 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:09:42.682 QEMU NVMe Ctrl (12342 ) core 1: 661.33 IO/s 151.21 secs/100000 ios 00:09:42.682 QEMU NVMe Ctrl (12340 ) core 2: 661.33 IO/s 151.21 secs/100000 ios 00:09:42.682 QEMU NVMe Ctrl (12342 ) core 3: 618.67 IO/s 161.64 secs/100000 ios 00:09:42.682 ======================================================== 00:09:42.682 00:09:42.682 ************************************ 00:09:42.682 END TEST nvme_arbitration 00:09:42.682 ************************************ 00:09:42.682 00:09:42.682 real 0m3.386s 00:09:42.682 user 0m9.397s 00:09:42.682 sys 0m0.127s 00:09:42.682 15:17:56 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.682 15:17:56 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:42.682 15:17:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:42.682 15:17:56 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:42.682 15:17:56 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:42.682 15:17:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.682 15:17:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.682 ************************************ 00:09:42.682 START TEST nvme_single_aen 00:09:42.682 ************************************ 00:09:42.682 15:17:56 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:43.247 Asynchronous Event Request test 00:09:43.247 Attached to 0000:00:11.0 00:09:43.247 Attached to 0000:00:13.0 00:09:43.247 Attached to 0000:00:10.0 00:09:43.247 Attached to 0000:00:12.0 00:09:43.247 Reset controller to setup AER completions for this process 00:09:43.247 Registering asynchronous event callbacks... 
00:09:43.247 Getting orig temperature thresholds of all controllers 00:09:43.247 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.247 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.247 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.247 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.247 Setting all controllers temperature threshold low to trigger AER 00:09:43.247 Waiting for all controllers temperature threshold to be set lower 00:09:43.247 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.247 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:43.247 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.247 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:43.247 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.247 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:43.247 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.247 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:43.247 Waiting for all controllers to trigger AER and reset threshold 00:09:43.247 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.247 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.247 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.247 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.247 Cleaning up... 00:09:43.247 ************************************ 00:09:43.247 END TEST nvme_single_aen 00:09:43.247 ************************************ 00:09:43.247 00:09:43.247 real 0m0.288s 00:09:43.247 user 0m0.097s 00:09:43.247 sys 0m0.145s 00:09:43.247 15:17:56 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.247 15:17:56 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:43.247 15:17:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:43.247 15:17:56 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:43.247 15:17:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:43.247 15:17:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.247 15:17:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.247 ************************************ 00:09:43.247 START TEST nvme_doorbell_aers 00:09:43.247 ************************************ 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:43.247 15:17:56 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:43.505 [2024-07-11 15:17:56.986849] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:09:53.481 Executing: test_write_invalid_db 00:09:53.481 Waiting for AER completion... 00:09:53.481 Failure: test_write_invalid_db 00:09:53.481 00:09:53.481 Executing: test_invalid_db_write_overflow_sq 00:09:53.481 Waiting for AER completion... 00:09:53.481 Failure: test_invalid_db_write_overflow_sq 00:09:53.481 00:09:53.481 Executing: test_invalid_db_write_overflow_cq 00:09:53.481 Waiting for AER completion... 00:09:53.481 Failure: test_invalid_db_write_overflow_cq 00:09:53.481 00:09:53.481 15:18:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:53.481 15:18:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:53.481 [2024-07-11 15:18:07.004715] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:03.453 Executing: test_write_invalid_db 00:10:03.453 Waiting for AER completion... 00:10:03.453 Failure: test_write_invalid_db 00:10:03.453 00:10:03.453 Executing: test_invalid_db_write_overflow_sq 00:10:03.453 Waiting for AER completion... 00:10:03.453 Failure: test_invalid_db_write_overflow_sq 00:10:03.453 00:10:03.453 Executing: test_invalid_db_write_overflow_cq 00:10:03.453 Waiting for AER completion... 00:10:03.453 Failure: test_invalid_db_write_overflow_cq 00:10:03.453 00:10:03.453 15:18:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:03.453 15:18:16 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:03.711 [2024-07-11 15:18:17.079775] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:13.699 Executing: test_write_invalid_db 00:10:13.699 Waiting for AER completion... 00:10:13.699 Failure: test_write_invalid_db 00:10:13.699 00:10:13.699 Executing: test_invalid_db_write_overflow_sq 00:10:13.699 Waiting for AER completion... 00:10:13.699 Failure: test_invalid_db_write_overflow_sq 00:10:13.699 00:10:13.699 Executing: test_invalid_db_write_overflow_cq 00:10:13.699 Waiting for AER completion... 
00:10:13.699 Failure: test_invalid_db_write_overflow_cq 00:10:13.699 00:10:13.699 15:18:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:13.699 15:18:26 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:13.699 [2024-07-11 15:18:27.112639] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 Executing: test_write_invalid_db 00:10:23.674 Waiting for AER completion... 00:10:23.674 Failure: test_write_invalid_db 00:10:23.674 00:10:23.674 Executing: test_invalid_db_write_overflow_sq 00:10:23.674 Waiting for AER completion... 00:10:23.674 Failure: test_invalid_db_write_overflow_sq 00:10:23.674 00:10:23.674 Executing: test_invalid_db_write_overflow_cq 00:10:23.674 Waiting for AER completion... 00:10:23.674 Failure: test_invalid_db_write_overflow_cq 00:10:23.674 00:10:23.674 ************************************ 00:10:23.674 END TEST nvme_doorbell_aers 00:10:23.674 ************************************ 00:10:23.674 00:10:23.674 real 0m40.251s 00:10:23.674 user 0m34.129s 00:10:23.674 sys 0m5.778s 00:10:23.674 15:18:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.674 15:18:36 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:23.674 15:18:36 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:23.674 15:18:36 nvme -- nvme/nvme.sh@97 -- # uname 00:10:23.674 15:18:36 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:23.674 15:18:36 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:23.674 15:18:36 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:23.674 15:18:36 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.674 15:18:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.674 ************************************ 00:10:23.674 START TEST nvme_multi_aen 00:10:23.674 ************************************ 00:10:23.674 15:18:36 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:23.674 [2024-07-11 15:18:37.206699] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.206816] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.206847] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.208628] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.208692] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.208711] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 
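Backing up to the doorbell test that ended just above: its four Executing/Failure blocks were produced one controller at a time by the loop traced at nvme/nvme.sh@72-@73, with the BDF list built by the helper traced at common/autotest_common.sh@1513-@1519. A reconstruction from that xtrace (local-variable details are inferred):

    get_nvme_bdfs() {                       # autotest_common.sh@1513-@1519
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1  # @1515: evaluates as '(( 4 == 0 ))' here
        printf '%s\n' "${bdfs[@]}"          # @1519
    }

    bdfs=($(get_nvme_bdfs))                 # 0000:00:10.0 through 0000:00:13.0
    for bdf in "${bdfs[@]}"; do             # nvme/nvme.sh@72
        # @73: 10-second budget per controller; --preserve-status returns the
        # test binary's own exit code instead of timeout's
        timeout --preserve-status 10 \
            /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers \
            -r "trtype:PCIe traddr:$bdf"
    done

The multi-AEN test that started above continues below; its repeated "owning process (pid 69544) is not found. Dropping the request." ERROR lines appear to be expected churn from re-attaching controllers after an earlier process exited, and the test still passes.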
00:10:23.674 [2024-07-11 15:18:37.210282] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.210329] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.210357] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.211814] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.211858] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 [2024-07-11 15:18:37.211890] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69544) is not found. Dropping the request. 00:10:23.674 Child process pid: 70063 00:10:23.932 [Child] Asynchronous Event Request test 00:10:23.932 [Child] Attached to 0000:00:11.0 00:10:23.932 [Child] Attached to 0000:00:13.0 00:10:23.932 [Child] Attached to 0000:00:10.0 00:10:23.932 [Child] Attached to 0000:00:12.0 00:10:23.932 [Child] Registering asynchronous event callbacks... 00:10:23.932 [Child] Getting orig temperature thresholds of all controllers 00:10:23.932 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.932 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.932 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.932 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.932 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:23.932 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.932 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.932 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.932 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.932 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.932 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 [Child] Cleaning up... 00:10:23.933 Asynchronous Event Request test 00:10:23.933 Attached to 0000:00:11.0 00:10:23.933 Attached to 0000:00:13.0 00:10:23.933 Attached to 0000:00:10.0 00:10:23.933 Attached to 0000:00:12.0 00:10:23.933 Reset controller to setup AER completions for this process 00:10:23.933 Registering asynchronous event callbacks... 
00:10:23.933 Getting orig temperature thresholds of all controllers 00:10:23.933 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.933 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.933 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.933 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:23.933 Setting all controllers temperature threshold low to trigger AER 00:10:23.933 Waiting for all controllers temperature threshold to be set lower 00:10:23.933 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.933 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:23.933 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.933 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:23.933 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.933 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:23.933 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:23.933 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:23.933 Waiting for all controllers to trigger AER and reset threshold 00:10:23.933 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:23.933 Cleaning up... 00:10:23.933 00:10:23.933 real 0m0.609s 00:10:23.933 user 0m0.243s 00:10:23.933 sys 0m0.260s 00:10:23.933 15:18:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.933 15:18:37 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:23.933 ************************************ 00:10:23.933 END TEST nvme_multi_aen 00:10:23.933 ************************************ 00:10:24.191 15:18:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:24.191 15:18:37 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:24.191 15:18:37 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:24.191 15:18:37 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.191 15:18:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.191 ************************************ 00:10:24.191 START TEST nvme_startup 00:10:24.191 ************************************ 00:10:24.191 15:18:37 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:24.449 Initializing NVMe Controllers 00:10:24.449 Attached to 0000:00:11.0 00:10:24.449 Attached to 0000:00:13.0 00:10:24.449 Attached to 0000:00:10.0 00:10:24.449 Attached to 0000:00:12.0 00:10:24.449 Initialization complete. 00:10:24.449 Time used:192699.969 (us). 
00:10:24.449 ************************************ 00:10:24.449 END TEST nvme_startup 00:10:24.449 ************************************ 00:10:24.449 00:10:24.449 real 0m0.278s 00:10:24.449 user 0m0.099s 00:10:24.449 sys 0m0.135s 00:10:24.449 15:18:37 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.449 15:18:37 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:24.449 15:18:37 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:24.449 15:18:37 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:24.449 15:18:37 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:24.449 15:18:37 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.449 15:18:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.449 ************************************ 00:10:24.449 START TEST nvme_multi_secondary 00:10:24.449 ************************************ 00:10:24.449 15:18:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:10:24.449 15:18:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=70119 00:10:24.449 15:18:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:24.449 15:18:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=70120 00:10:24.449 15:18:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:24.449 15:18:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:27.730 Initializing NVMe Controllers 00:10:27.730 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:27.730 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:27.730 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:27.730 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:27.730 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:27.730 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:27.730 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:27.730 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:27.730 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:27.730 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:27.730 Initialization complete. Launching workers. 
00:10:27.730 ======================================================== 00:10:27.730 Latency(us) 00:10:27.731 Device Information : IOPS MiB/s Average min max 00:10:27.731 PCIE (0000:00:11.0) NSID 1 from core 2: 2567.75 10.03 6230.15 1235.81 12936.07 00:10:27.731 PCIE (0000:00:13.0) NSID 1 from core 2: 2567.75 10.03 6230.85 1198.70 12885.20 00:10:27.731 PCIE (0000:00:10.0) NSID 1 from core 2: 2567.75 10.03 6230.19 1200.78 13060.44 00:10:27.731 PCIE (0000:00:12.0) NSID 1 from core 2: 2567.75 10.03 6231.34 1205.32 12122.17 00:10:27.731 PCIE (0000:00:12.0) NSID 2 from core 2: 2567.75 10.03 6231.95 1205.87 12865.25 00:10:27.731 PCIE (0000:00:12.0) NSID 3 from core 2: 2567.75 10.03 6231.91 1212.95 12817.36 00:10:27.731 ======================================================== 00:10:27.731 Total : 15406.48 60.18 6231.07 1198.70 13060.44 00:10:27.731 00:10:27.731 15:18:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 70119 00:10:28.001 Initializing NVMe Controllers 00:10:28.001 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:28.001 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:28.001 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:28.001 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:28.001 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:28.001 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:28.001 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:28.001 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:28.001 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:28.001 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:28.001 Initialization complete. Launching workers. 00:10:28.001 ======================================================== 00:10:28.001 Latency(us) 00:10:28.001 Device Information : IOPS MiB/s Average min max 00:10:28.001 PCIE (0000:00:11.0) NSID 1 from core 1: 5384.93 21.03 2970.73 1489.42 5810.13 00:10:28.001 PCIE (0000:00:13.0) NSID 1 from core 1: 5384.93 21.03 2970.64 1417.00 5872.57 00:10:28.001 PCIE (0000:00:10.0) NSID 1 from core 1: 5384.93 21.03 2969.30 1450.62 5690.40 00:10:28.001 PCIE (0000:00:12.0) NSID 1 from core 1: 5384.93 21.03 2970.43 1500.52 5640.10 00:10:28.001 PCIE (0000:00:12.0) NSID 2 from core 1: 5384.93 21.03 2970.35 1498.50 5968.41 00:10:28.001 PCIE (0000:00:12.0) NSID 3 from core 1: 5384.93 21.03 2970.39 1442.44 5827.28 00:10:28.001 ======================================================== 00:10:28.001 Total : 32309.60 126.21 2970.31 1417.00 5968.41 00:10:28.001 00:10:29.963 Initializing NVMe Controllers 00:10:29.963 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:29.963 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:29.963 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:29.963 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:29.963 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:29.963 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:29.963 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:29.963 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:29.963 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:29.963 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:29.963 Initialization complete. Launching workers. 
00:10:29.963 ======================================================== 00:10:29.963 Latency(us) 00:10:29.963 Device Information : IOPS MiB/s Average min max 00:10:29.963 PCIE (0000:00:11.0) NSID 1 from core 0: 7656.79 29.91 2089.20 1012.67 6342.07 00:10:29.963 PCIE (0000:00:13.0) NSID 1 from core 0: 7656.79 29.91 2089.17 1022.14 5982.53 00:10:29.963 PCIE (0000:00:10.0) NSID 1 from core 0: 7656.79 29.91 2088.02 1002.80 5531.12 00:10:29.964 PCIE (0000:00:12.0) NSID 1 from core 0: 7656.79 29.91 2089.05 1033.86 6509.89 00:10:29.964 PCIE (0000:00:12.0) NSID 2 from core 0: 7656.79 29.91 2089.00 949.60 6611.58 00:10:29.964 PCIE (0000:00:12.0) NSID 3 from core 0: 7656.79 29.91 2088.95 916.14 6557.52 00:10:29.964 ======================================================== 00:10:29.964 Total : 45940.72 179.46 2088.90 916.14 6611.58 00:10:29.964 00:10:29.964 15:18:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 70120 00:10:29.964 15:18:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=70189 00:10:29.964 15:18:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:29.964 15:18:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=70190 00:10:29.964 15:18:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:29.964 15:18:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:33.247 Initializing NVMe Controllers 00:10:33.247 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:33.247 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:33.247 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:33.247 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:33.247 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:33.247 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:33.247 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:33.247 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:33.247 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:33.247 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:33.247 Initialization complete. Launching workers. 
00:10:33.247 ======================================================== 00:10:33.247 Latency(us) 00:10:33.247 Device Information : IOPS MiB/s Average min max 00:10:33.247 PCIE (0000:00:11.0) NSID 1 from core 1: 5490.11 21.45 2913.82 1199.86 5911.31 00:10:33.247 PCIE (0000:00:13.0) NSID 1 from core 1: 5490.11 21.45 2914.04 1210.32 5778.92 00:10:33.247 PCIE (0000:00:10.0) NSID 1 from core 1: 5490.11 21.45 2912.98 1156.46 6401.82 00:10:33.247 PCIE (0000:00:12.0) NSID 1 from core 1: 5490.11 21.45 2914.33 1219.79 5578.52 00:10:33.247 PCIE (0000:00:12.0) NSID 2 from core 1: 5490.11 21.45 2914.43 1076.93 5523.54 00:10:33.247 PCIE (0000:00:12.0) NSID 3 from core 1: 5490.11 21.45 2914.42 1190.35 5632.84 00:10:33.247 ======================================================== 00:10:33.247 Total : 32940.66 128.67 2914.00 1076.93 6401.82 00:10:33.247 00:10:33.247 Initializing NVMe Controllers 00:10:33.247 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:33.247 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:33.247 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:33.247 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:33.247 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:33.247 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:33.247 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:33.247 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:33.247 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:33.247 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:33.247 Initialization complete. Launching workers. 00:10:33.247 ======================================================== 00:10:33.247 Latency(us) 00:10:33.247 Device Information : IOPS MiB/s Average min max 00:10:33.247 PCIE (0000:00:11.0) NSID 1 from core 0: 5582.89 21.81 2865.35 1080.66 5244.44 00:10:33.247 PCIE (0000:00:13.0) NSID 1 from core 0: 5582.89 21.81 2865.23 1099.21 5248.35 00:10:33.247 PCIE (0000:00:10.0) NSID 1 from core 0: 5582.89 21.81 2863.93 1057.49 5067.74 00:10:33.247 PCIE (0000:00:12.0) NSID 1 from core 0: 5582.89 21.81 2865.00 1081.96 5250.17 00:10:33.247 PCIE (0000:00:12.0) NSID 2 from core 0: 5582.89 21.81 2864.87 897.74 5343.68 00:10:33.248 PCIE (0000:00:12.0) NSID 3 from core 0: 5582.89 21.81 2864.95 1072.55 5550.82 00:10:33.248 ======================================================== 00:10:33.248 Total : 33497.36 130.85 2864.89 897.74 5550.82 00:10:33.248 00:10:35.150 Initializing NVMe Controllers 00:10:35.150 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.150 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:35.150 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:35.150 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:35.150 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:35.150 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:35.150 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:35.150 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:35.150 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:35.150 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:35.150 Initialization complete. Launching workers. 
00:10:35.150 ======================================================== 00:10:35.150 Latency(us) 00:10:35.150 Device Information : IOPS MiB/s Average min max 00:10:35.150 PCIE (0000:00:11.0) NSID 1 from core 2: 3652.57 14.27 4380.05 1127.10 13040.11 00:10:35.150 PCIE (0000:00:13.0) NSID 1 from core 2: 3652.57 14.27 4380.04 1126.82 13186.09 00:10:35.150 PCIE (0000:00:10.0) NSID 1 from core 2: 3652.57 14.27 4378.43 1043.57 13298.93 00:10:35.150 PCIE (0000:00:12.0) NSID 1 from core 2: 3652.57 14.27 4379.46 1081.57 13063.49 00:10:35.150 PCIE (0000:00:12.0) NSID 2 from core 2: 3652.57 14.27 4379.57 948.19 12371.48 00:10:35.150 PCIE (0000:00:12.0) NSID 3 from core 2: 3652.57 14.27 4379.02 860.29 12996.48 00:10:35.150 ======================================================== 00:10:35.150 Total : 21915.43 85.61 4379.43 860.29 13298.93 00:10:35.150 00:10:35.409 ************************************ 00:10:35.409 END TEST nvme_multi_secondary 00:10:35.409 ************************************ 00:10:35.409 15:18:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 70189 00:10:35.409 15:18:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 70190 00:10:35.409 00:10:35.409 real 0m10.887s 00:10:35.409 user 0m18.597s 00:10:35.409 sys 0m0.855s 00:10:35.409 15:18:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.409 15:18:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:35.409 15:18:48 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:35.409 15:18:48 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:35.409 15:18:48 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:35.409 15:18:48 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/69130 ]] 00:10:35.409 15:18:48 nvme -- common/autotest_common.sh@1088 -- # kill 69130 00:10:35.409 15:18:48 nvme -- common/autotest_common.sh@1089 -- # wait 69130 00:10:35.409 [2024-07-11 15:18:48.862610] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.862743] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.862786] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.862823] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.866439] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.866511] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.866535] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.866575] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 
00:10:35.409 [2024-07-11 15:18:48.868842] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.868917] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.868950] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.868971] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.871289] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.871362] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.871391] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.409 [2024-07-11 15:18:48.871413] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70062) is not found. Dropping the request. 00:10:35.668 15:18:49 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:10:35.668 15:18:49 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:10:35.668 15:18:49 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:35.668 15:18:49 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:35.668 15:18:49 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.668 15:18:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:35.668 ************************************ 00:10:35.668 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:35.668 ************************************ 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:35.668 * Looking for test storage... 
00:10:35.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:35.668 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70344 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70344 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 70344 ']' 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
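The BDF discovery just traced is worth spelling out: gen_nvme.sh prints an SPDK JSON config with one attach entry per NVMe controller, and jq strips it down to the PCI addresses. A sketch matching the traced commands ($rootdir is /home/vagrant/spdk_repo/spdk in this run; the empty-array guard mirrors the traced '(( 4 == 0 ))' check, and the exact bail-out behaviour is an assumption):

    get_nvme_bdfs() {
      local bdfs
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      ((${#bdfs[@]} != 0))        # fail if no controllers were found
      printf '%s\n' "${bdfs[@]}"  # 0000:00:10.0 ... 0000:00:13.0 here
    }

    get_first_nvme_bdf() {
      local bdfs
      bdfs=($(get_nvme_bdfs))
      echo "${bdfs[0]}"           # 0000:00:10.0 becomes the test target
    }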
00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.927 15:18:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:35.927 [2024-07-11 15:18:49.417317] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:35.927 [2024-07-11 15:18:49.418104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70344 ] 00:10:36.185 [2024-07-11 15:18:49.593973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.444 [2024-07-11 15:18:49.827259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.444 [2024-07-11 15:18:49.827470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.444 [2024-07-11 15:18:49.827608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.444 [2024-07-11 15:18:49.827615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:37.009 nvme0n1 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_M0J81.txt 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:37.009 true 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720711130 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70367 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:37.009 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
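Condensing the RPC setup above: the target attaches the controller at 0000:00:10.0 as bdev nvme0, then arms a one-shot error injection that intercepts the next admin command with opcode 10 (0x0a, Get Features), holds it for up to 15 s without submitting it to the device, and completes it with sct=0, sc=1 (Invalid Opcode). Commands and arguments are taken directly from the trace; only the commentary is added:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Expose the first controller as bdev "nvme0".
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

    # Hold the next Get Features admin command (opc 10) for up to 15 s,
    # then complete it with sct=0, sc=1 instead of submitting it.
    $rpc bdev_nvme_add_error_injection -n nvme0 \
        --cmd-type admin --opc 10 --timeout-in-us 15000000 \
        --err-count 1 --sct 0 --sc 1 --do_not_submit

The test then sends the base64-encoded Get Features SQE shown below with bdev_nvme_send_cmd -n nvme0 -t admin -r c2h; that command sticks on the injection until bdev_nvme_reset_controller nvme0 forces it to complete manually, which is exactly what the reset notices that follow record.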
00:10:37.010 15:18:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:39.538 [2024-07-11 15:18:52.602790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:39.538 [2024-07-11 15:18:52.603290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:39.538 [2024-07-11 15:18:52.603335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:39.538 [2024-07-11 15:18:52.603359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.538 [2024-07-11 15:18:52.605487] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.538 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70367 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70367 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70367 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_M0J81.txt 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:39.538 15:18:52 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:39.538 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_M0J81.txt 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70344 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 70344 ']' 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 70344 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70344 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:39.539 killing process with pid 70344 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70344' 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 70344 00:10:39.539 15:18:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 70344 00:10:41.441 15:18:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:41.441 15:18:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:41.441 00:10:41.441 
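The jq/base64/hexdump pipeline above is how the test digs the NVMe status out of the raw completion that was dumped to the temp file. A reconstruction of base64_decode_bits consistent with the traced values, assuming the standard 16-byte completion layout (status word in bytes 14-15, little-endian, phase bit in bit 0):

    base64_decode_bits() {
      local bin_array status
      # Decode the base64 cpl into one hex byte per array element.
      bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
      # Bytes 14-15 carry the little-endian status word.
      status=$((bin_array[14] | bin_array[15] << 8))
      # Shift and mask out the requested field.
      printf '0x%x' $(((status >> $2) & $3))
    }

For the cpl AAAAAAAAAAAAAAAAAAACAA== above, the status word is 0x0002, so SC = (2 >> 1) & 255 = 0x1 and SCT = (2 >> 9) & 3 = 0x0 -- matching the injected sct=0/sc=1 and satisfying the comparisons at nvme_reset_stuck_adm_cmd.sh lines 75 and 79 traced just above.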
real 0m5.511s 00:10:41.441 user 0m19.061s 00:10:41.441 sys 0m0.549s 00:10:41.441 15:18:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.441 ************************************ 00:10:41.441 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:41.441 15:18:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:41.441 ************************************ 00:10:41.441 15:18:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:41.441 15:18:54 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:41.441 15:18:54 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:41.441 15:18:54 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:41.441 15:18:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.441 15:18:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.441 ************************************ 00:10:41.441 START TEST nvme_fio 00:10:41.441 ************************************ 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:41.441 15:18:54 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:41.441 15:18:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:41.700 15:18:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:41.700 15:18:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:41.958 15:18:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:41.959 15:18:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:41.959 
15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:41.959 15:18:55 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:42.217 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:42.217 fio-3.35 00:10:42.217 Starting 1 thread 00:10:44.751 00:10:44.751 test: (groupid=0, jobs=1): err= 0: pid=70512: Thu Jul 11 15:18:58 2024 00:10:44.751 read: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(115MiB/2001msec) 00:10:44.751 slat (nsec): min=4149, max=55164, avg=6424.71, stdev=3412.11 00:10:44.751 clat (usec): min=368, max=8922, avg=4310.57, stdev=533.34 00:10:44.751 lat (usec): min=373, max=8974, avg=4317.00, stdev=533.92 00:10:44.751 clat percentiles (usec): 00:10:44.751 | 1.00th=[ 3490], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3818], 00:10:44.751 | 30.00th=[ 3949], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4424], 00:10:44.751 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 5211], 00:10:44.751 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6587], 99.95th=[ 7439], 00:10:44.751 | 99.99th=[ 8717] 00:10:44.751 bw ( KiB/s): min=54520, max=58976, per=96.98%, avg=57266.67, stdev=2402.30, samples=3 00:10:44.751 iops : min=13630, max=14744, avg=14316.67, stdev=600.57, samples=3 00:10:44.751 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(116MiB/2001msec); 0 zone resets 00:10:44.751 slat (nsec): min=4272, max=50917, avg=6625.32, stdev=3484.85 00:10:44.751 clat (usec): min=262, max=8698, avg=4322.17, stdev=537.51 00:10:44.751 lat (usec): min=269, max=8716, avg=4328.80, stdev=538.12 00:10:44.751 clat percentiles (usec): 00:10:44.751 | 1.00th=[ 3490], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3818], 00:10:44.751 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4424], 00:10:44.751 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5276], 00:10:44.751 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6718], 99.95th=[ 7635], 00:10:44.751 | 99.99th=[ 8455] 00:10:44.751 bw ( KiB/s): min=54768, max=58560, per=96.70%, avg=57162.67, stdev=2083.46, samples=3 00:10:44.751 iops : min=13692, 
max=14640, avg=14290.67, stdev=520.87, samples=3 00:10:44.751 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:10:44.751 lat (msec) : 2=0.04%, 4=33.76%, 10=66.15% 00:10:44.751 cpu : usr=98.85%, sys=0.10%, ctx=2, majf=0, minf=606 00:10:44.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.751 issued rwts: total=29540,29570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.751 00:10:44.751 Run status group 0 (all jobs): 00:10:44.751 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=115MiB (121MB), run=2001-2001msec 00:10:44.751 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=116MiB (121MB), run=2001-2001msec 00:10:45.010 ----------------------------------------------------- 00:10:45.010 Suppressions used: 00:10:45.010 count bytes template 00:10:45.010 1 32 /usr/src/fio/parse.c 00:10:45.010 1 8 libtcmalloc_minimal.so 00:10:45.010 ----------------------------------------------------- 00:10:45.010 00:10:45.010 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:45.010 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:45.010 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:45.010 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:45.269 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:45.269 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:45.534 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:45.534 15:18:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:45.535 15:18:58 
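Two things are worth pulling out of these fio runs. First, a sanity check on the numbers: with --bs=4096, bandwidth and IOPS are rigidly coupled as bw(KiB/s) = 4 x IOPS, so the first run's read average of 57266.67 KiB/s divides to exactly the 14316.67 IOPS printed beside it. Second, the launch pattern repeated once per controller: find the ASan runtime the sanitizer-built plugin needs, preload it ahead of the SPDK ioengine, then start fio. A condensed sketch of the traced fio_plugin helper (the real one iterates over both 'libasan' and 'libclang_rt.asan' and breaks on the first hit):

    fio_plugin() {
      local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
      local asan_lib
      # Which sanitizer runtime is the plugin linked against?
      asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
      # Load the runtime first, then the ioengine, then run fio.
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
    }

    fio_plugin /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096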
nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:45.535 15:18:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:45.535 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:45.535 fio-3.35 00:10:45.535 Starting 1 thread 00:10:48.837 00:10:48.837 test: (groupid=0, jobs=1): err= 0: pid=70567: Thu Jul 11 15:19:02 2024 00:10:48.837 read: IOPS=14.6k, BW=57.0MiB/s (59.7MB/s)(114MiB/2001msec) 00:10:48.837 slat (nsec): min=4170, max=50415, avg=6503.86, stdev=2917.77 00:10:48.837 clat (usec): min=323, max=10191, avg=4360.70, stdev=705.06 00:10:48.837 lat (usec): min=330, max=10239, avg=4367.20, stdev=705.89 00:10:48.837 clat percentiles (usec): 00:10:48.837 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:10:48.837 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4424], 00:10:48.837 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5407], 00:10:48.837 | 99.00th=[ 7177], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9110], 00:10:48.837 | 99.99th=[10159] 00:10:48.837 bw ( KiB/s): min=54824, max=61264, per=100.00%, avg=58666.67, stdev=3395.81, samples=3 00:10:48.837 iops : min=13706, max=15316, avg=14666.67, stdev=848.95, samples=3 00:10:48.837 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(114MiB/2001msec); 0 zone resets 00:10:48.837 slat (nsec): min=4462, max=70295, avg=6700.25, stdev=3040.26 00:10:48.837 clat (usec): min=287, max=10067, avg=4374.84, stdev=707.63 00:10:48.837 lat (usec): min=294, max=10137, avg=4381.54, stdev=708.51 00:10:48.837 clat percentiles (usec): 00:10:48.837 | 1.00th=[ 3359], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:10:48.837 | 30.00th=[ 3982], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4424], 00:10:48.837 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5211], 95.00th=[ 5407], 00:10:48.837 | 99.00th=[ 7177], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9241], 00:10:48.837 | 99.99th=[ 9765] 00:10:48.837 bw ( KiB/s): min=53856, max=61680, per=100.00%, avg=58496.00, stdev=4110.19, samples=3 00:10:48.837 iops : min=13464, max=15420, avg=14624.00, stdev=1027.55, samples=3 00:10:48.837 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:48.837 lat (msec) : 2=0.06%, 4=32.32%, 10=67.58%, 20=0.01% 00:10:48.837 cpu : usr=99.00%, sys=0.00%, ctx=3, majf=0, minf=606 00:10:48.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:48.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.837 issued rwts: total=29177,29245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.837 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.837 00:10:48.837 Run status group 0 (all jobs): 00:10:48.837 READ: bw=57.0MiB/s (59.7MB/s), 57.0MiB/s-57.0MiB/s (59.7MB/s-59.7MB/s), io=114MiB (120MB), run=2001-2001msec 00:10:48.837 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=114MiB (120MB), run=2001-2001msec 00:10:48.837 ----------------------------------------------------- 00:10:48.837 Suppressions 
used: 00:10:48.837 count bytes template 00:10:48.837 1 32 /usr/src/fio/parse.c 00:10:48.837 1 8 libtcmalloc_minimal.so 00:10:48.837 ----------------------------------------------------- 00:10:48.837 00:10:48.837 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:48.837 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:48.837 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:48.837 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:49.095 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:49.095 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:49.355 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:49.355 15:19:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:49.355 15:19:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:49.612 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:49.612 fio-3.35 00:10:49.612 Starting 1 thread 00:10:52.894 00:10:52.894 test: (groupid=0, jobs=1): err= 0: pid=70632: Thu Jul 11 15:19:06 2024 00:10:52.894 read: IOPS=13.1k, BW=51.4MiB/s (53.9MB/s)(103MiB/2001msec) 00:10:52.894 slat (usec): min=4, max=120, avg= 7.18, stdev= 3.89 00:10:52.894 clat (usec): min=304, max=10930, avg=4841.03, stdev=769.06 00:10:52.894 lat 
(usec): min=310, max=10981, avg=4848.21, stdev=770.03 00:10:52.894 clat percentiles (usec): 00:10:52.894 | 1.00th=[ 3490], 5.00th=[ 3785], 10.00th=[ 3916], 20.00th=[ 4228], 00:10:52.894 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 5014], 00:10:52.894 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5866], 00:10:52.894 | 99.00th=[ 7504], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[10290], 00:10:52.894 | 99.99th=[10814] 00:10:52.894 bw ( KiB/s): min=51888, max=53656, per=100.00%, avg=52618.67, stdev=923.03, samples=3 00:10:52.894 iops : min=12972, max=13414, avg=13154.67, stdev=230.76, samples=3 00:10:52.894 write: IOPS=13.2k, BW=51.4MiB/s (53.9MB/s)(103MiB/2001msec); 0 zone resets 00:10:52.894 slat (nsec): min=4196, max=75332, avg=7330.58, stdev=3883.42 00:10:52.894 clat (usec): min=267, max=10827, avg=4854.44, stdev=779.20 00:10:52.894 lat (usec): min=272, max=10844, avg=4861.77, stdev=780.20 00:10:52.894 clat percentiles (usec): 00:10:52.894 | 1.00th=[ 3523], 5.00th=[ 3785], 10.00th=[ 3949], 20.00th=[ 4228], 00:10:52.894 | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4817], 60.00th=[ 5014], 00:10:52.894 | 70.00th=[ 5211], 80.00th=[ 5473], 90.00th=[ 5735], 95.00th=[ 5932], 00:10:52.894 | 99.00th=[ 7635], 99.50th=[ 8455], 99.90th=[ 9634], 99.95th=[10028], 00:10:52.894 | 99.99th=[10552] 00:10:52.894 bw ( KiB/s): min=52168, max=53488, per=100.00%, avg=52685.33, stdev=704.74, samples=3 00:10:52.894 iops : min=13042, max=13372, avg=13171.33, stdev=176.19, samples=3 00:10:52.894 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:10:52.894 lat (msec) : 2=0.04%, 4=12.39%, 10=87.47%, 20=0.06% 00:10:52.894 cpu : usr=98.75%, sys=0.05%, ctx=5, majf=0, minf=606 00:10:52.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:52.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.894 issued rwts: total=26313,26322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.894 00:10:52.894 Run status group 0 (all jobs): 00:10:52.894 READ: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=103MiB (108MB), run=2001-2001msec 00:10:52.894 WRITE: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=103MiB (108MB), run=2001-2001msec 00:10:52.894 ----------------------------------------------------- 00:10:52.894 Suppressions used: 00:10:52.894 count bytes template 00:10:52.894 1 32 /usr/src/fio/parse.c 00:10:52.894 1 8 libtcmalloc_minimal.so 00:10:52.894 ----------------------------------------------------- 00:10:52.894 00:10:52.894 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:52.894 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:52.894 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:52.894 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:53.152 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:53.152 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:53.411 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:53.411 15:19:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:53.411 15:19:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:53.411 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:53.411 fio-3.35 00:10:53.411 Starting 1 thread 00:10:57.601 00:10:57.601 test: (groupid=0, jobs=1): err= 0: pid=70696: Thu Jul 11 15:19:10 2024 00:10:57.601 read: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(113MiB/2001msec) 00:10:57.601 slat (nsec): min=4299, max=64622, avg=6703.11, stdev=3530.40 00:10:57.601 clat (usec): min=249, max=12756, avg=4411.15, stdev=888.41 00:10:57.601 lat (usec): min=255, max=12808, avg=4417.85, stdev=889.87 00:10:57.601 clat percentiles (usec): 00:10:57.601 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3818], 00:10:57.601 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4146], 60.00th=[ 4359], 00:10:57.601 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5735], 00:10:57.601 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[ 8848], 99.95th=[10683], 00:10:57.601 | 99.99th=[12649] 00:10:57.601 bw ( KiB/s): min=50056, max=63560, per=100.00%, avg=57800.00, stdev=6967.19, samples=3 00:10:57.601 iops : min=12514, max=15890, avg=14450.00, stdev=1741.80, samples=3 00:10:57.601 write: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(113MiB/2001msec); 0 zone resets 00:10:57.601 slat (nsec): min=4400, max=69300, avg=6867.20, stdev=3603.37 00:10:57.601 clat (usec): min=283, max=12519, avg=4423.52, stdev=890.21 00:10:57.601 lat (usec): min=291, max=12536, avg=4430.38, stdev=891.72 00:10:57.601 clat percentiles (usec): 00:10:57.601 | 1.00th=[ 3458], 5.00th=[ 3654], 10.00th=[ 3720], 
20.00th=[ 3818], 00:10:57.601 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4146], 60.00th=[ 4359], 00:10:57.601 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5800], 00:10:57.601 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[ 8979], 99.95th=[10945], 00:10:57.601 | 99.99th=[12256] 00:10:57.601 bw ( KiB/s): min=50440, max=63000, per=99.84%, avg=57664.00, stdev=6489.36, samples=3 00:10:57.601 iops : min=12610, max=15750, avg=14416.00, stdev=1622.34, samples=3 00:10:57.601 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:57.601 lat (msec) : 2=0.07%, 4=38.50%, 10=61.32%, 20=0.07% 00:10:57.601 cpu : usr=98.65%, sys=0.35%, ctx=2, majf=0, minf=604 00:10:57.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:57.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.601 issued rwts: total=28866,28893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.601 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.601 00:10:57.601 Run status group 0 (all jobs): 00:10:57.601 READ: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=113MiB (118MB), run=2001-2001msec 00:10:57.601 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=113MiB (118MB), run=2001-2001msec 00:10:57.601 ----------------------------------------------------- 00:10:57.601 Suppressions used: 00:10:57.601 count bytes template 00:10:57.601 1 32 /usr/src/fio/parse.c 00:10:57.601 1 8 libtcmalloc_minimal.so 00:10:57.601 ----------------------------------------------------- 00:10:57.601 00:10:57.601 15:19:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:57.601 15:19:11 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:57.601 00:10:57.601 real 0m16.299s 00:10:57.601 user 0m12.886s 00:10:57.601 sys 0m2.297s 00:10:57.601 ************************************ 00:10:57.601 END TEST nvme_fio 00:10:57.601 ************************************ 00:10:57.601 15:19:11 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.601 15:19:11 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:57.601 15:19:11 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:57.601 00:10:57.601 real 1m29.529s 00:10:57.601 user 3m42.715s 00:10:57.601 sys 0m14.154s 00:10:57.601 15:19:11 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.601 ************************************ 00:10:57.601 END TEST nvme 00:10:57.601 15:19:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:57.601 ************************************ 00:10:57.601 15:19:11 -- common/autotest_common.sh@1142 -- # return 0 00:10:57.601 15:19:11 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:10:57.601 15:19:11 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:57.601 15:19:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:57.601 15:19:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.601 15:19:11 -- common/autotest_common.sh@10 -- # set +x 00:10:57.601 ************************************ 00:10:57.601 START TEST nvme_scc 00:10:57.601 ************************************ 00:10:57.601 15:19:11 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:57.601 * Looking for test storage... 
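Every suite in this run (nvme_multi_secondary, bdev_nvme_reset_stuck_adm_cmd, nvme_fio, and now nvme_scc) goes through the same run_test wrapper, which is what produces the START TEST/END TEST banners and the real/user/sys totals scattered through the log. Its rough shape, inferred from the traced banners (the real helper in autotest_common.sh also validates the argument count -- the recurring '[' 2 -le 1 ']' -- and toggles xtrace around its output):

    run_test() {
      local test_name=$1
      shift
      echo '************************************'
      echo "START TEST $test_name"
      echo '************************************'
      time "$@"      # emits the real/user/sys block when the test returns
      echo '************************************'
      echo "END TEST $test_name"
      echo '************************************'
    }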
00:10:57.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:57.601 15:19:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.601 15:19:11 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.601 15:19:11 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.601 15:19:11 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.601 15:19:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.601 15:19:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.601 15:19:11 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.601 15:19:11 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:57.601 15:19:11 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:57.601 15:19:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:57.860 15:19:11 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.860 15:19:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:57.860 15:19:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:57.860 15:19:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:57.860 15:19:11 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:58.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.118 Waiting for block devices as requested 00:10:58.377 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.377 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.377 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:58.635 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:03.937 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:03.937 15:19:17 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:03.937 15:19:17 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:03.937 15:19:17 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:03.937 15:19:17 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.937 15:19:17 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:03.937 
15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:03.937 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:03.938 15:19:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
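[editor's note] The trace above is nvme/functions.sh walking the output of `nvme id-ctrl` field by field: it sets IFS=:, reads each line into reg/val, and evals the pair into a bash associative array (nvme0). A minimal standalone sketch of that loop, with a here-doc standing in for the nvme-cli output (the array and sample fields here are illustrative, not the full helper):

#!/usr/bin/env bash
# Minimal sketch of the loop traced above: nvme_get()-style parsing of
# "field : value" lines into a bash associative array. The here-doc
# stands in for `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0`.
declare -A nvme0=()
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}     # field names arrive right-padded, e.g. "oaes   "
  [[ -n $val ]] || continue    # skip lines that carry no value
  val=${val# }                 # drop the space that follows the colon
  # eval mirrors the traced `eval 'nvme0[oaes]="0x100"'` statements and
  # keeps multi-word values (power states, LBA formats) as one string.
  eval "nvme0[$reg]=\"\$val\""
done <<'EOF'
oaes      : 0x100
ctratt    : 0x8000
cntrltype : 1
EOF
echo "oaes=${nvme0[oaes]} ctratt=${nvme0[ctratt]}"  # oaes=0x100 ctratt=0x8000

In the real helper the array name is dynamic (it arrives as the ref argument and is declared with `local -gA`, visible in the trace), which is why eval is used at all; with a fixed array name a plain `nvme0[$reg]=$val` would do.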
00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:03.938 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:03.939 15:19:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
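[editor's note] Among the values just captured, sqes=0x66 and cqes=0x44 are nibble-packed sizes: per the NVMe spec the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each as a power of two. A quick decode (decode_qes is a hypothetical helper, not part of the test scripts):

decode_qes() {
  local v=$(( $1 ))   # arithmetic context accepts the 0x-prefixed hex
  printf 'required=%d B, max=%d B\n' \
    $(( 1 << (v & 0xf) )) $(( 1 << ((v >> 4) & 0xf) ))
}
decode_qes 0x66   # SQ entries: required=64 B, max=64 B
decode_qes 0x44   # CQ entries: required=16 B, max=16 B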
00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:03.939 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
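[editor's note] Note how the ps0 descriptor just above survives as a single multi-word value ('mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0') precisely because the eval quotes it. Pulling individual fields back out is plain word-splitting; a sketch (field names follow nvme-cli's id-ctrl output, with entry/exit latencies in microseconds):

ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
for kv in $ps0; do            # deliberate word-splitting of the descriptor
  case $kv in
    mp:*)    echo "max power         : ${kv#mp:}" ;;
    enlat:*) echo "entry latency (us): ${kv#enlat:}" ;;
    exlat:*) echo "exit latency (us) : ${kv#exlat:}" ;;
  esac
done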
00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
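[editor's note] The id-ns fields captured for nvme0n1 above are enough to work out the namespace geometry: flbas=0x4 selects LBA format 4 in its low nibble, that format's lbads of 12 (recorded a little further below) gives 4096-byte blocks, and nsze=0x140000 is the block count. Worked out in shell:

nsze=0x140000; flbas=0x4; lbads=12
fmt=$(( flbas & 0xf ))                  # in-use LBA format index -> 4
bytes=$(( nsze * (1 << lbads) ))        # 1310720 blocks * 4096 B
echo "lbaf$fmt in use, $(( bytes >> 30 )) GiB"   # lbaf4 in use, 5 GiB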
00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:03.940 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
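[editor's note] The lbaf0..lbaf7 strings recorded just below ('ms:0 lbads:9 rp:0 ' through 'ms:64 lbads:12 rp:0 ') each describe one supported LBA format: ms is the metadata bytes per block, lbads the block size as a power of two, rp the relative performance, and '(in use)' marks the active format. A small parser (parse_lbaf is a hypothetical helper for illustration):

parse_lbaf() {
  local ms=${1#ms:} lbads=${1#*lbads:}
  ms=${ms%% *}; lbads=${lbads%% *}
  printf 'block=%d B, metadata=%d B%s\n' \
    $(( 1 << lbads )) "$ms" "$([[ $1 == *'(in use)'* ]] && echo ' (in use)')"
}
parse_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # block=4096 B, metadata=0 B (in use)
parse_lbaf 'ms:64 lbads:9 rp:0 '           # block=512 B, metadata=64 B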
00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.941 15:19:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:03.941 15:19:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:03.941 15:19:17 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:03.941 15:19:17 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:03.942 15:19:17 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.942 15:19:17 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:03.942 15:19:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.942 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.943 
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:03.943 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:03.944 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
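What the trace above is doing: nvme_get runs `/usr/local/src/nvme-cli/nvme id-ctrl <dev>` (or id-ns), splits each "key : value" output line with `IFS=:`, and evals the pair into a per-device associative array (nvme1, nvme1n1, ...). A minimal, self-contained sketch of that pattern, assuming nvme-cli is installed; the array name `ctrl_info` and the whitespace trimming here are illustrative, not the test script's exact code:

  #!/usr/bin/env bash
  # Parse `nvme id-ctrl` text output ("key : value" lines) into an assoc array.
  parse_id_output() {
      local dev=$1 reg val
      declare -gA ctrl_info=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}       # strip the padding around the key
          val=${val# }                   # drop the space after the colon
          [[ -n $reg && -n $val ]] && ctrl_info[$reg]=$val
      done < <(nvme id-ctrl "$dev")
  }
  parse_id_output /dev/nvme1
  echo "oncs=${ctrl_info[oncs]} sqes=${ctrl_info[sqes]}"

The eval visible in the real trace exists because the array name is dynamic (one array per controller); a bash nameref (`local -n`, which the script itself uses at functions.sh@53) would be the eval-free way to get the same effect.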
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:11:03.945 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:11:03.946 15:19:17 nvme_scc -- scripts/common.sh@15 -- # local i
00:11:03.946 15:19:17 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]]
00:11:03.946 15:19:17 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:03.946 15:19:17 nvme_scc -- scripts/common.sh@24 -- # return 0
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
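With nvme1n1 parsed, the trace shows the bookkeeping step (functions.sh lines 58-63) and the loop advancing to the next sysfs entry, /sys/class/nvme/nvme2 at PCI 0000:00:12.0. A sketch of that enumeration pattern under the standard Linux sysfs layout, assuming PCIe-attached controllers; the array names mirror the trace, but this is not the script verbatim:

  #!/usr/bin/env bash
  # Walk /sys/class/nvme, recording each controller's namespaces and PCI BDF.
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                               # e.g. nvme2
      bdf=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0
      ns_list=()
      for ns in "$ctrl/${ctrl##*/}n"*; do                # nvme2n1, nvme2n2, ...
          [[ -e $ns ]] && ns_list+=("${ns##*/}")
      done
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ns_list[*]}
      bdfs[$ctrl_dev]=$bdf
      ordered_ctrls[${ctrl_dev#nvme}]=$ctrl_dev          # index by controller number
  done
  for c in "${!bdfs[@]}"; do echo "$c -> ${bdfs[$c]}"; done

Indexing ordered_ctrls by the numeric suffix is what keeps iteration order stable across runs even though bash associative arrays are unordered.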
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:11:03.946 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:11:03.947 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:11:03.948 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
eval 'nvme2[ofcs]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:03.949 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:03.950 15:19:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.950 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
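The eval chain above is the nvme_get helper from nvme/functions.sh at work: it feeds `nvme id-ctrl`/`id-ns` output through a `while IFS=: read -r reg val` loop and evals each pair into a global associative array (nvme2, nvme2n1, nvme2n2, ...). A minimal standalone sketch of that pattern, fed from a canned id-ns fragment instead of a live /dev node — the array name `ns` and the sample values are illustrative, not the SPDK source:

#!/usr/bin/env bash
# Sketch of the functions.sh@17-23 pattern traced above; the here-doc stands
# in for `nvme id-ns /dev/nvme2n2` so it runs without hardware.
declare -A ns=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # "nsze      " -> "nsze"
    [[ -n $reg && -n $val ]] || continue  # mirrors the [[ -n ... ]] guards
    val=${val# }                          # drop the space after the colon
    eval "ns[$reg]=\"$val\""              # functions.sh@23: eval 'ns[reg]="val"'
done <<'EOF'
nsze   : 0x100000
flbas  : 0x4
lbaf4  : ms:0 lbads:12 rp:0 (in use)
EOF
echo "${ns[nsze]} blocks, flbas ${ns[flbas]}"

Because only the last `read` variable keeps the remainder of the line, values that themselves contain colons (the lbafN and psN strings) land intact in val, which is exactly what the trace shows.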
00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.951 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:03.952 15:19:17 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.952 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
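Between dumps the trace steps through nvme/functions.sh@54-58, globbing each controller's namespaces out of sysfs (nvme2n1, nvme2n2, now nvme2n3) and indexing them in _ctrl_ns. A condensed sketch of that walk, with the path hardcoded to the nvme2 controller seen here and error handling omitted:

# Condensed sketch of the functions.sh@54-58 loop visible in this trace.
declare -A _ctrl_ns=()
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/${ctrl##*/}n"*; do   # /sys/class/nvme/nvme2/nvme2n1, n2, n3
    [[ -e $ns ]] || continue          # @55: skip if the glob matched nothing
    ns_dev=${ns##*/}                  # e.g. nvme2n3
    _ctrl_ns[${ns_dev##*n}]=$ns_dev   # @58: key by namespace number -> 3
done
echo "namespaces: ${_ctrl_ns[*]}"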
00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
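As with the first two namespaces, nvme2n3 reports an all-zero nguid and eui64: this QEMU-emulated namespace carries no persistent unique identifier. A small hypothetical helper (not part of functions.sh) that makes the check explicit before anything relies on /dev/disk/by-id style lookups:

# Hypothetical helper: true if a namespace exposes a usable NGUID or EUI64.
has_unique_id() {
    local nguid=$1 eui64=$2
    [[ $nguid =~ [^0] || $eui64 =~ [^0] ]]   # any non-zero digit anywhere
}
has_unique_id 00000000000000000000000000000000 0000000000000000 \
    || echo "nvme2n3: no NGUID/EUI64, fall back to subnqn+nsid"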
00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:03.953 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
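Each lbafN entry decodes as metadata bytes (ms), an LBA-data-size exponent (lbads), and relative performance (rp). The format marked in use is lbaf4 with lbads:12, i.e. 2^12 = 4096-byte blocks, consistent with the "Namespace Block Size:4096" the simple-copy test prints further down. Recovering the byte size is a single shift:

  lbads=12
  echo $(( 1 << lbads ))    # 4096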
00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:03.954 15:19:17 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:03.954 15:19:17 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:03.954 15:19:17 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:03.954 15:19:17 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:03.954 15:19:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:03.954 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:03.955 15:19:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:03.955 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:03.955 15:19:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:03.956 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:04.215 15:19:17 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.215 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
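The oncs=0x15d captured here is what drives controller selection later in this run: ctrl_has_scc tests ONCS bit 8, the Copy command bit, which is exactly the (( oncs & 1 << 8 )) check visible in the functions.sh@186 trace below. The same test in isolation:

  oncs=0x15d
  if (( oncs & (1 << 8) )); then    # bit 8 = 0x100, set in 0x15d
      echo 'Simple Copy (SCC) supported'
  fi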
00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:04.216 
15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
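One detail worth pulling out of the controller dump above: sqes=0x66 and cqes=0x44 pack the minimum (low nibble) and maximum (high nibble) queue-entry sizes as powers of two, so this controller uses 64-byte submission entries and 16-byte completion entries:

  sqes=0x66 cqes=0x44
  echo $(( 1 << (sqes & 0xf) ))    # 64-byte SQ entries (min; max is the same here)
  echo $(( 1 << (cqes & 0xf) ))    # 16-byte CQ entries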
00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:04.216 15:19:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:04.216 15:19:17 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:04.216 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:04.217 15:19:17 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:04.217 15:19:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:04.217 15:19:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:04.217 15:19:17 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.347 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.347 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.347 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.347 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:05.347 15:19:18 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:05.347 15:19:18 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:05.347 15:19:18 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.347 15:19:18 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:05.347 ************************************ 00:11:05.347 START TEST nvme_simple_copy 00:11:05.347 ************************************ 00:11:05.347 15:19:18 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:05.604 Initializing NVMe Controllers 00:11:05.604 Attaching to 0000:00:10.0 00:11:05.604 Controller supports SCC. Attached to 0000:00:10.0 00:11:05.604 Namespace ID: 1 size: 6GB 00:11:05.604 Initialization complete. 00:11:05.604 00:11:05.604 Controller QEMU NVMe Ctrl (12340 ) 00:11:05.604 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:05.604 Namespace Block Size:4096 00:11:05.604 Writing LBAs 0 to 63 with Random Data 00:11:05.604 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:05.604 LBAs matching Written Data: 64 00:11:05.604 00:11:05.604 real 0m0.329s 00:11:05.604 user 0m0.135s 00:11:05.604 sys 0m0.091s 00:11:05.604 15:19:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.604 15:19:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:05.604 ************************************ 00:11:05.604 END TEST nvme_simple_copy 00:11:05.604 ************************************ 00:11:05.860 15:19:19 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:11:05.860 00:11:05.860 real 0m8.102s 00:11:05.860 user 0m1.396s 00:11:05.860 sys 0m1.641s 00:11:05.860 15:19:19 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:05.860 ************************************ 00:11:05.860 END TEST nvme_scc 00:11:05.860 15:19:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:05.860 ************************************ 00:11:05.860 15:19:19 -- common/autotest_common.sh@1142 -- # return 0 00:11:05.860 15:19:19 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:05.860 15:19:19 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:05.860 15:19:19 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:05.860 15:19:19 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:05.860 15:19:19 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:05.860 15:19:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:05.860 15:19:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.860 15:19:19 -- common/autotest_common.sh@10 -- # set +x 00:11:05.860 ************************************ 00:11:05.860 START TEST nvme_fdp 00:11:05.860 ************************************ 00:11:05.860 15:19:19 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:11:05.860 * Looking for test storage... 
00:11:05.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:05.860 15:19:19 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:05.860 15:19:19 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:05.860 15:19:19 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:05.860 15:19:19 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:05.860 15:19:19 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.860 15:19:19 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.860 15:19:19 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.860 15:19:19 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.860 15:19:19 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.860 15:19:19 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.861 15:19:19 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.861 15:19:19 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:05.861 15:19:19 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:05.861 15:19:19 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:05.861 15:19:19 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.861 15:19:19 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:06.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.374 Waiting for block devices as requested 00:11:06.374 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.630 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.630 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.630 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.906 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:11.906 15:19:25 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:11.906 15:19:25 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:11.906 15:19:25 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:11.906 15:19:25 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.906 15:19:25 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 
15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:11.906 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:11.907 15:19:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:11.907 15:19:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.907 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:11.907 15:19:25 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
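Several of the values just captured are log2-encoded, which matters when a later check compares them against byte sizes: mdts=7 is in units of the controller's minimum memory page size, and sqes=0x66/cqes=0x44 pack maximum and required entry sizes into the high and low nibbles. A quick decode, assuming the 4 KiB CAP.MPSMIN that QEMU reports:

    # Decoding the log2-encoded fields parsed above (MPSMIN of 4 KiB assumed):
    mdts=7; sqes=0x66; cqes=0x44
    echo $(( 4096 * (1 << mdts) ))        # 524288 -> 512 KiB max transfer size
    echo $(( 1 << (sqes & 0xf) ))         # 64-byte required SQ entry size
    echo $(( 1 << (cqes & 0xf) ))         # 16-byte required CQ entry size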
val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:11.908 
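With the controller registers stored, functions.sh@53-57 moves on to the namespaces: for each /sys/class/nvme/nvme0/nvme0n1-style child node the same nvme_get helper is reused with id-ns, filling nvme0n1[...] with nsze/ncap/nuse and the LBA-format table that follows. A sketch of that inner loop as traced (reusing the assumed nvme_get from above):

    # Namespace pass traced at functions.sh@53-58: one id-ns parse per child
    # node, recorded into the controller's namespace map.
    declare -gA nvme0_ns=()                      # namespace map for nvme0
    declare -n _ctrl_ns=nvme0_ns                 # functions.sh@53
    for ns in /sys/class/nvme/nvme0/nvme0n*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                         # nvme0n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev          # index by namespace number
    done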
15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:11.908 
15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.908 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:11.909 15:19:25 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
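The lbaf0-lbaf7 rows just parsed are the namespace's LBA format table, and the flbas=0x4 captured earlier selects index 4, the row nvme-cli tags "(in use)": ms:0 lbads:12, i.e. 4096-byte blocks with no metadata. That makes the namespace size directly computable from nsze:

    # flbas bits 3:0 pick the active LBA format; lbaf4 above has lbads:12.
    flbas=0x4; nsze=0x140000
    lbads=12                            # from the "(in use)" lbaf$((flbas & 0xf)) row
    echo $(( nsze * (1 << lbads) ))     # 5368709120 bytes = a 5 GiB namespace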
00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:11.909 15:19:25 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:11.909 15:19:25 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:11.909 15:19:25 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.909 15:19:25 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:11.909 15:19:25 nvme_fdp -- 
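functions.sh@58-63 in the trace above is the bookkeeping step: the finished controller is recorded in the global ctrls/nvmes/bdfs maps and slotted into ordered_ctrls by index, after which the outer loop starts over on nvme1 (0000:00:10.0). A sketch of how a test can later walk those maps in stable order (map structure as traced; the echo is illustrative only):

    # Consuming the registration made at functions.sh@60-63.
    for ctrl in "${ordered_ctrls[@]}"; do
        [[ -n $ctrl ]] || continue
        unset -n ns_map                       # clear any previous nameref
        declare -n ns_map=${nvmes[$ctrl]}     # nvme0_ns, nvme1_ns, ...
        echo "$ctrl @ ${bdfs[$ctrl]}: ${#ns_map[@]} namespace(s)"
    done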
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.909 
15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.909 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:11.910 15:19:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
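Two of the values just captured are worth decoding. wctemp=343 and cctemp=373 are kelvin, per the NVMe spec, so the warning and critical temperature thresholds are 69.85 °C and 99.85 °C. oacs=0x12a is a capability bitmask; a small sketch to expand it, assuming the NVMe 1.4 OACS bit assignments:

# Decode oacs=0x12a from the trace, assuming NVMe 1.4 bit meanings.
oacs=0x12a
names=([0]="Security Send/Receive" [1]="Format NVM" [2]="Firmware Download/Commit"
       [3]="Namespace Management" [4]="Device Self-test" [5]="Directives"
       [6]="NVMe-MI" [7]="Virtualization Management" [8]="Doorbell Buffer Config")
for bit in "${!names[@]}"; do
    (( oacs & (1 << bit) )) && echo "OACS bit $bit: ${names[bit]}"
done

For 0x12a this prints bits 1, 3, 5, and 8: Format NVM, Namespace Management, Directives, and Doorbell Buffer Config. Directives support is notable for the nvme_fdp suite, since FDP placement handles are driven through the directives mechanism.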
00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:11.910 15:19:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.910 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
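sqes=0x66 and cqes=0x44 in the block above pack two sizes into one byte: bits 3:0 are the required (minimum) queue-entry size and bits 7:4 the maximum, both as log2 of bytes. 0x66 therefore means 64-byte submission-queue entries and 0x44 means 16-byte completion-queue entries, the standard sizes. A one-liner check:

sqes=0x66 cqes=0x44
# Low nibble = required entry size, high nibble = maximum, as powers of two.
echo "SQE: min $(( 1 << (sqes & 0xf) )) / max $(( 1 << ((sqes >> 4) & 0xf) )) bytes"
echo "CQE: min $(( 1 << (cqes & 0xf) )) / max $(( 1 << ((cqes >> 4) & 0xf) )) bytes"

oncs=0x15d in the same block is the optional-command bitmask; read against the NVMe 2.0 layout it advertises Compare, Dataset Management, Write Zeroes, Save/Select in Set Features, Timestamp, and Copy, which is consistent with the non-zero mssrl/mcl/msrc copy limits reported in the id-ns data below.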
00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
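The id-ns fields just parsed size the namespace: nsze, ncap, and nuse are all 0x17a17a logical blocks, and flbas=0x7 selects LBA format 7, whose lbaf7 entry further down shows lbads:12, i.e. 2^12 = 4096-byte blocks. Converting to bytes:

nsze=0x17a17a   # namespace size in logical blocks, from the trace
lbads=12        # log2(block size) of the in-use LBA format (lbaf7)
printf '%d blocks x %d B = %d bytes (~6.3 GB)\n' \
    "$(( nsze ))" "$(( 1 << lbads ))" "$(( nsze * (1 << lbads) ))"

which gives 1548666 × 4096 = 6343335936 bytes for this QEMU-backed namespace.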
00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.911 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 
15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
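Once the last lbaf entry is read, the harness records the namespace and controller in its global maps (the functions.sh@58–@63 lines just below: _ctrl_ns, ctrls, nvmes, bdfs, ordered_ctrls) and the @47 loop advances to the next /sys/class/nvme entry, gated by pci_can_use. A condensed sketch of that outer discovery loop (helper bodies are illustrative, not SPDK's exact code):

declare -A ctrls nvmes bdfs

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    # Resolve the controller's PCI address, e.g. 0000:00:12.0; the real
    # harness additionally filters it through pci_can_use.
    pci=$(basename "$(readlink -f "$ctrl/device")")
    ctrl_dev=${ctrl##*/}                  # nvme1, nvme2, ...
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
    for ns in "$ctrl/${ctrl##*/}n"*; do   # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] && nvmes[$ctrl_dev]+="${ns##*/} "
    done
done

for c in "${!ctrls[@]}"; do echo "$c -> ${bdfs[$c]} (${nvmes[$c]})"; done

In the trace this yields nvme1 at 0000:00:10.0 (stored above) and nvme2 at 0000:00:12.0, whose id-ctrl dump follows.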
00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:11.912 15:19:25 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:11.912 15:19:25 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:11.912 15:19:25 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:11.912 15:19:25 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:11.912 15:19:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.912 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:11.913 15:19:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:11.913 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:11.914 15:19:25 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:11.914 
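The block above is the xtrace of nvme_get in nvme/functions.sh capturing `nvme id-ctrl /dev/nvme2` into the nvme2 associative array: functions.sh@21 splits each output line on ':' with read -r reg val, @22 skips registers with empty values, and @23 evals the assignment. A minimal sketch of that loop, reconstructed from the traced line numbers (the key/value trimming here is an assumption, not the verbatim source):

    nvme_get() {                        # nvme_get <array-name> <cmd...>
        local ref=$1 reg val
        shift                           # @18
        local -gA "$ref=()"             # @20: global associative array
        while IFS=: read -r reg val; do # @21: split "name : value" lines
            reg=${reg//[[:space:]]/}    # assumption: "ps 0" -> "ps0"
            val=${val# }
            [[ -n $val ]] || continue   # @22: keep only populated registers
            eval "${ref}[\$reg]=\$val"  # @23: e.g. nvme2[lpa]=0x7
        done < <("$@")                  # @16: e.g. nvme id-ctrl /dev/nvme2
    }

Called as `nvme_get nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2`, after which fields read back as plain lookups, e.g. "${nvme2[subnqn]}".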
15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:11.914 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:12.176 15:19:25 nvme_fdp -- 
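nvme2n1 reports nsze, ncap and nuse all equal to 0x100000, so the namespace is fully allocated and fully utilized at 1,048,576 logical blocks. With the 4096-byte blocks selected by flbas=0x4 (format 4, whose lbads:12 entry appears further down), that works out to 4 GiB; a quick check of the arithmetic:

    nsze=0x100000                                  # blocks, from id-ns above
    lbads=12                                       # lbaf4 "(in use)": 2^12-byte blocks
    printf '%d GiB\n' $(( nsze * (1 << lbads) >> 30 ))   # -> 4 GiB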
nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.176 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:12.177 15:19:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
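The same capture repeats once per namespace: functions.sh@54 globs /sys/class/nvme/nvme2/nvme2n*, @57 reruns nvme_get with `nvme id-ns`, and @58 records each device under its namespace number in the array nameref'd at @53. A sketch of that walk, with the nameref target name derived by assumption from the controller path (the trace only shows the expanded nvme2_ns form):

    scan_ctrl_namespaces() {
        local ctrl=$1 ns ns_dev
        local -n _ctrl_ns="${ctrl##*/}_ns"     # @53: assumes nvme2 -> nvme2_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do    # @54: nvme2n1, nvme2n2, nvme2n3
            [[ -e $ns ]] || continue           # @55
            ns_dev=${ns##*/}                   # @56
            nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"   # @57
            _ctrl_ns[${ns##*n}]=$ns_dev        # @58: e.g. _ctrl_ns[2]=nvme2n2
        done
    }
    # scan_ctrl_namespaces /sys/class/nvme/nvme2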
00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
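As with nvme2n1, this namespace advertises flbas=0x4 while lbaf4 is the entry tagged "(in use)". Per the NVMe spec, FLBAS bits 3:0 index the current LBA format when the namespace has at most 16 formats (nlbaf=7 here), and lbads is the log2 of the data size, so the active block size is 2^12 = 4096 bytes with no per-block metadata (ms:0). The decode in two lines:

    flbas=0x4
    echo "format $(( flbas & 0xf )), block size $(( 1 << 12 )) bytes"  # format 4, 4096 bytes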
00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 
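A few of the controller-level values captured earlier decode the same way. sqes=0x66 and cqes=0x44 pack the required (bits 3:0) and maximum (bits 7:4) queue entry sizes as powers of two, so this controller requires and supports exactly 64-byte submission and 16-byte completion entries, while wctemp=343 and cctemp=373 are Kelvin thresholds, roughly 70 C warning and 100 C critical:

    sqes=0x66 cqes=0x44 wctemp=343 cctemp=373
    echo "SQE $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"   # 64..64
    echo "CQE $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"   # 16..16
    echo "warn $(( wctemp - 273 )) C, critical $(( cctemp - 273 )) C"    # 70 C, 100 C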
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.177 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:12.178 15:19:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.178 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:12.179 15:19:25 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:12.179 15:19:25 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:12.179 15:19:25 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:12.179 15:19:25 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:12.179 15:19:25 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
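The wall of trace above is a single pattern repeated once per identify field: nvme_get runs 'nvme id-ctrl' (or 'nvme id-ns'), splits each 'name : value' output line on the colon with IFS=:, and evals the pair into a per-controller associative array (e.g. nvme3[vid]=0x1b36). A minimal standalone sketch of that loop, assuming nvme-cli's human-readable output format; the real functions.sh version instead evals each assignment into a caller-named array, as the trace shows:

declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                    # field names arrive padded, e.g. "vid       "
    val=${val#"${val%%[![:space:]]*}"}          # left-trim the value, keep inner spaces/colons
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
echo "vid=${ctrl[vid]} ctratt=${ctrl[ctratt]}"  # 0x1b36 / 0x88010 per the trace

Once populated, every later capability check in this log is a plain array lookup rather than another trip to the device.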
00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:12.179 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
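With the identify data cached, FDP detection further down in this log is a one-line bit test: CTRATT bit 19 advertises Flexible Data Placement, and the ctratt of 0x88010 captured above for nvme3 is the only value in this run with that bit set. A small sketch of the same check, with values taken from the trace:

ctratt=0x88010                      # nvme3; the other controllers report 0x8000
if (( ctratt & (1 << 19) )); then   # 0x88010 & 0x80000 = 0x80000 -> nonzero
    echo "FDP capable"
fi

This is exactly why the ctrl_has_fdp sweep later in the trace echoes only nvme3.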
00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 
15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:12.180 15:19:25 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:12.180 15:19:25 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:12.180 15:19:25 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:12.180 15:19:25 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.313 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.313 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.313 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.313 15:19:26 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:13.313 15:19:26 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:13.313 15:19:26 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.313 15:19:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:13.313 ************************************ 00:11:13.313 START TEST nvme_flexible_data_placement 00:11:13.313 ************************************ 00:11:13.313 15:19:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:13.880 Initializing NVMe Controllers 00:11:13.880 Attaching to 0000:00:13.0 00:11:13.880 Controller supports FDP Attached to 0000:00:13.0 00:11:13.880 Namespace ID: 1 Endurance Group ID: 1 
00:11:13.880 Initialization complete. 00:11:13.880 00:11:13.880 ================================== 00:11:13.880 == FDP tests for Namespace: #01 == 00:11:13.880 ================================== 00:11:13.880 00:11:13.880 Get Feature: FDP: 00:11:13.880 ================= 00:11:13.880 Enabled: Yes 00:11:13.880 FDP configuration Index: 0 00:11:13.880 00:11:13.880 FDP configurations log page 00:11:13.880 =========================== 00:11:13.880 Number of FDP configurations: 1 00:11:13.880 Version: 0 00:11:13.880 Size: 112 00:11:13.880 FDP Configuration Descriptor: 0 00:11:13.880 Descriptor Size: 96 00:11:13.880 Reclaim Group Identifier format: 2 00:11:13.880 FDP Volatile Write Cache: Not Present 00:11:13.880 FDP Configuration: Valid 00:11:13.880 Vendor Specific Size: 0 00:11:13.880 Number of Reclaim Groups: 2 00:11:13.880 Number of Reclaim Unit Handles: 8 00:11:13.880 Max Placement Identifiers: 128 00:11:13.880 Number of Namespaces Supported: 256 00:11:13.880 Reclaim Unit Nominal Size: 6000000 bytes 00:11:13.880 Estimated Reclaim Unit Time Limit: Not Reported 00:11:13.880 RUH Desc #000: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #001: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #002: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #003: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #004: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #005: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #006: RUH Type: Initially Isolated 00:11:13.880 RUH Desc #007: RUH Type: Initially Isolated 00:11:13.880 00:11:13.880 FDP reclaim unit handle usage log page 00:11:13.880 ====================================== 00:11:13.880 Number of Reclaim Unit Handles: 8 00:11:13.880 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:13.880 RUH Usage Desc #001: RUH Attributes: Unused 00:11:13.880 RUH Usage Desc #002: RUH Attributes: Unused 00:11:13.880 RUH Usage Desc #003: RUH Attributes: Unused 00:11:13.880 RUH Usage Desc #004: RUH Attributes: Unused 00:11:13.880 RUH Usage Desc #005: RUH Attributes: Unused 00:11:13.880 RUH Usage Desc #006: RUH Attributes: Unused 00:11:13.880 RUH Usage Desc #007: RUH Attributes: Unused 00:11:13.880 00:11:13.880 FDP statistics log page 00:11:13.880 ======================= 00:11:13.880 Host bytes with metadata written: 841252864 00:11:13.880 Media bytes with metadata written: 841506816 00:11:13.880 Media bytes erased: 0 00:11:13.880 00:11:13.880 FDP Reclaim unit handle status 00:11:13.880 ============================== 00:11:13.880 Number of RUHS descriptors: 2 00:11:13.880 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003db8 00:11:13.880 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:13.880 00:11:13.880 FDP write on placement id: 0 success 00:11:13.880 00:11:13.880 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:13.880 00:11:13.880 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:13.880 00:11:13.880 Get Feature: FDP Events for Placement handle: #0 00:11:13.880 ======================== 00:11:13.880 Number of FDP Events: 6 00:11:13.880 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:13.880 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:13.880 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:13.880 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:13.880 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:13.880 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
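A note on the RUHS descriptors above: RUAMW (Reclaim Unit Available Media Writes) counts logical blocks, and the namespaces in this run use the lbads:12 in-use format (4096-byte blocks) per the earlier identify trace. Descriptor #0000's remaining headroom then works out as below; a quick sketch, assuming that block size:

ruamw=0x3db8                   # RUHS Desc #0000 above
block=$((1 << 12))             # lbads:12 -> 4096-byte logical blocks
echo $((ruamw)) blocks         # 15800
echo $((ruamw * block)) bytes  # 64716800, roughly 61.7 MiB left in the reclaim unit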
00:11:13.880 00:11:13.880 FDP events log page 00:11:13.880 =================== 00:11:13.880 Number of FDP events: 1 00:11:13.880 FDP Event #0: 00:11:13.880 Event Type: RU Not Written to Capacity 00:11:13.880 Placement Identifier: Valid 00:11:13.880 NSID: Valid 00:11:13.880 Location: Valid 00:11:13.880 Placement Identifier: 0 00:11:13.880 Event Timestamp: 8 00:11:13.880 Namespace Identifier: 1 00:11:13.880 Reclaim Group Identifier: 0 00:11:13.880 Reclaim Unit Handle Identifier: 0 00:11:13.880 00:11:13.880 FDP test passed 00:11:13.880 00:11:13.880 real 0m0.284s 00:11:13.880 user 0m0.099s 00:11:13.880 sys 0m0.083s 00:11:13.880 15:19:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.880 ************************************ 00:11:13.880 END TEST nvme_flexible_data_placement 00:11:13.880 15:19:27 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:13.880 ************************************ 00:11:13.880 15:19:27 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:11:13.880 00:11:13.880 real 0m7.968s 00:11:13.880 user 0m1.295s 00:11:13.880 sys 0m1.649s 00:11:13.880 15:19:27 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.880 15:19:27 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:13.880 ************************************ 00:11:13.880 END TEST nvme_fdp 00:11:13.880 ************************************ 00:11:13.880 15:19:27 -- common/autotest_common.sh@1142 -- # return 0 00:11:13.880 15:19:27 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:13.880 15:19:27 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:13.880 15:19:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:13.880 15:19:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.880 15:19:27 -- common/autotest_common.sh@10 -- # set +x 00:11:13.880 ************************************ 00:11:13.880 START TEST nvme_rpc 00:11:13.881 ************************************ 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:13.881 * Looking for test storage... 
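Every suite in this log passes through the same run_test harness: it prints the START banner, times the test body (the real/user/sys triple above comes from bash's time keyword), prints END, and propagates the exit status. A rough, hypothetical sketch of the wrapper pattern; the real helper in common/autotest_common.sh also manages xtrace and the argument check ('[' 2 -le 1 ']') visible in the trace:

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                # emits the real/user/sys lines seen above
    local rc=$?
    echo "END TEST $name"
    return $rc
}
run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh  # as invoked above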
00:11:13.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72030 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:13.881 15:19:27 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72030 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 72030 ']' 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.881 15:19:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.139 [2024-07-11 15:19:27.568418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:14.139 [2024-07-11 15:19:27.568618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72030 ] 00:11:14.139 [2024-07-11 15:19:27.741045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.397 [2024-07-11 15:19:27.934910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.397 [2024-07-11 15:19:27.934918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.330 15:19:28 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.330 15:19:28 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:15.330 15:19:28 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:15.330 Nvme0n1 00:11:15.330 15:19:28 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:15.330 15:19:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:15.588 request: 00:11:15.588 { 00:11:15.588 "bdev_name": "Nvme0n1", 00:11:15.588 "filename": "non_existing_file", 00:11:15.588 "method": "bdev_nvme_apply_firmware", 00:11:15.588 "req_id": 1 00:11:15.588 } 00:11:15.588 Got JSON-RPC error response 00:11:15.588 response: 00:11:15.588 { 00:11:15.588 "code": -32603, 00:11:15.588 "message": "open file failed." 00:11:15.588 } 00:11:15.845 15:19:29 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:15.845 15:19:29 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:15.845 15:19:29 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:16.103 15:19:29 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:16.103 15:19:29 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 72030 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 72030 ']' 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 72030 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72030 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:16.103 killing process with pid 72030 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72030' 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@967 -- # kill 72030 00:11:16.103 15:19:29 nvme_rpc -- common/autotest_common.sh@972 -- # wait 72030 00:11:18.002 00:11:18.002 real 0m4.119s 00:11:18.002 user 0m7.824s 00:11:18.002 sys 0m0.559s 00:11:18.002 15:19:31 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.002 15:19:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.002 ************************************ 00:11:18.002 END TEST nvme_rpc 00:11:18.002 ************************************ 00:11:18.002 15:19:31 -- common/autotest_common.sh@1142 -- # return 0 00:11:18.002 15:19:31 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
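Condensed, the nvme_rpc test traced above is three rpc.py calls against the running spdk_tgt; the firmware step is a deliberate negative test, so the JSON-RPC "open file failed." error and non-zero exit are the expected outcome. A minimal sketch using the same commands as the trace (run from the spdk repo root):
# attach the first controller, attempt a firmware update with a missing file, detach
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "open file failed, as expected"
scripts/rpc.py bdev_nvme_detach_controller Nvme0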
00:11:18.002 15:19:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:18.002 15:19:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.002 15:19:31 -- common/autotest_common.sh@10 -- # set +x 00:11:18.002 ************************************ 00:11:18.002 START TEST nvme_rpc_timeouts 00:11:18.002 ************************************ 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:18.002 * Looking for test storage... 00:11:18.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72102 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72102 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72126 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:18.002 15:19:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72126 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 72126 ']' 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.002 15:19:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:18.260 [2024-07-11 15:19:31.677523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:18.260 [2024-07-11 15:19:31.677721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72126 ] 00:11:18.260 [2024-07-11 15:19:31.851625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.518 [2024-07-11 15:19:32.027137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.518 [2024-07-11 15:19:32.027152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.082 15:19:32 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.082 15:19:32 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:11:19.082 Checking default timeout settings: 00:11:19.082 15:19:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:19.082 15:19:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:19.647 Making settings changes with rpc: 00:11:19.647 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:19.647 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:19.925 Check default vs. modified settings: 00:11:19.925 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:19.925 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:20.200 Setting action_on_timeout is changed as expected. 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
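The comparison machinery above works on two save_config snapshots taken on either side of the bdev_nvme_set_options call. Condensed into a standalone sketch (tmp paths hypothetical; the real test suffixes them with the target pid):
# snapshot defaults, change the timeouts, snapshot again
scripts/rpc.py save_config > /tmp/settings_default
scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
scripts/rpc.py save_config > /tmp/settings_modified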
00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:20.200 Setting timeout_us is changed as expected. 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:20.200 Setting timeout_admin_us is changed as expected. 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
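Each per-setting check above is the same three-stage pipeline: grep the key out of the saved config, take the second field, strip punctuation, then string-compare. A condensed sketch of that loop, following the hypothetical snapshot paths from the sketch above:
for setting in action_on_timeout timeout_us timeout_admin_us; do
  # extract the value for this setting from each snapshot
  before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
  [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done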
00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72102 /tmp/settings_modified_72102 00:11:20.200 15:19:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72126 00:11:20.200 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 72126 ']' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 72126 00:11:20.200 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:11:20.200 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.200 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72126 00:11:20.200 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.201 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.201 killing process with pid 72126 00:11:20.201 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72126' 00:11:20.201 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 72126 00:11:20.201 15:19:33 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 72126 00:11:22.099 RPC TIMEOUT SETTING TEST PASSED. 00:11:22.099 15:19:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:22.099 00:11:22.099 real 0m4.164s 00:11:22.099 user 0m8.085s 00:11:22.099 sys 0m0.585s 00:11:22.099 15:19:35 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.099 15:19:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:22.099 ************************************ 00:11:22.099 END TEST nvme_rpc_timeouts 00:11:22.099 ************************************ 00:11:22.099 15:19:35 -- common/autotest_common.sh@1142 -- # return 0 00:11:22.099 15:19:35 -- spdk/autotest.sh@243 -- # uname -s 00:11:22.099 15:19:35 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:22.099 15:19:35 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:22.099 15:19:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:22.099 15:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.099 15:19:35 -- common/autotest_common.sh@10 -- # set +x 00:11:22.099 ************************************ 00:11:22.099 START TEST sw_hotplug 00:11:22.099 ************************************ 00:11:22.099 15:19:35 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:22.358 * Looking for test storage... 
00:11:22.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:22.358 15:19:35 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:22.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.873 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:22.873 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:22.873 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:22.873 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:22.873 15:19:36 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:22.873 15:19:36 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:22.873 15:19:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:22.873 15:19:36 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:22.873 15:19:36 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:22.873 15:19:36 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:22.873 15:19:36 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:22.873 15:19:36 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:22.873 15:19:36 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:23.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:23.389 Waiting for block devices as requested 00:11:23.389 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.646 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.646 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:23.646 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:28.910 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:28.910 15:19:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:28.910 15:19:42 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:29.168 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:29.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.427 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:29.684 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:29.941 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.941 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:29.941 15:19:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72981 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:29.941 15:19:43 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:11:29.941 15:19:43 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:11:29.941 15:19:43 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:11:29.941 15:19:43 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:11:29.941 15:19:43 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:29.941 15:19:43 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:30.199 Initializing NVMe Controllers 00:11:30.199 Attaching to 0000:00:10.0 00:11:30.199 Attaching to 0000:00:11.0 00:11:30.199 Attached to 0000:00:10.0 00:11:30.199 Attached to 0000:00:11.0 00:11:30.199 Initialization complete. Starting I/O... 
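The nvme_in_userspace scan traced above walks PCI class code 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), and is equivalent to this one-liner lifted from the same scripts/common.sh logic:
# list NVMe controllers by BDF: match class/subclass 0108, prog-if 02
lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'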
00:11:30.199 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:30.199 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:30.199 00:11:31.575 QEMU NVMe Ctrl (12340 ): 1131 I/Os completed (+1131) 00:11:31.575 QEMU NVMe Ctrl (12341 ): 1149 I/Os completed (+1149) 00:11:31.575 00:11:32.510 QEMU NVMe Ctrl (12340 ): 2601 I/Os completed (+1470) 00:11:32.510 QEMU NVMe Ctrl (12341 ): 2621 I/Os completed (+1472) 00:11:32.510 00:11:33.446 QEMU NVMe Ctrl (12340 ): 4293 I/Os completed (+1692) 00:11:33.446 QEMU NVMe Ctrl (12341 ): 4341 I/Os completed (+1720) 00:11:33.446 00:11:34.380 QEMU NVMe Ctrl (12340 ): 6033 I/Os completed (+1740) 00:11:34.380 QEMU NVMe Ctrl (12341 ): 6103 I/Os completed (+1762) 00:11:34.380 00:11:35.314 QEMU NVMe Ctrl (12340 ): 7717 I/Os completed (+1684) 00:11:35.314 QEMU NVMe Ctrl (12341 ): 7835 I/Os completed (+1732) 00:11:35.314 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.265 [2024-07-11 15:19:49.552486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:36.265 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:36.265 [2024-07-11 15:19:49.554476] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.554572] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.554601] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.554625] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:36.265 [2024-07-11 15:19:49.557678] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.557762] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.557787] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.557809] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.265 [2024-07-11 15:19:49.585866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:36.265 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:36.265 [2024-07-11 15:19:49.587761] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.587817] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.587850] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.587891] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:36.265 [2024-07-11 15:19:49.590492] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.590539] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.590567] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 [2024-07-11 15:19:49.590587] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.265 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:36.265 EAL: Scan for (pci) bus failed. 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.265 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.265 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:36.265 Attaching to 0000:00:10.0 00:11:36.265 Attached to 0000:00:10.0 00:11:36.534 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.534 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.534 15:19:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.534 Attaching to 0000:00:11.0 00:11:36.534 Attached to 0000:00:11.0 00:11:37.469 QEMU NVMe Ctrl (12340 ): 1896 I/Os completed (+1896) 00:11:37.469 QEMU NVMe Ctrl (12341 ): 1748 I/Os completed (+1748) 00:11:37.469 00:11:38.404 QEMU NVMe Ctrl (12340 ): 3860 I/Os completed (+1964) 00:11:38.404 QEMU NVMe Ctrl (12341 ): 3755 I/Os completed (+2007) 00:11:38.404 00:11:39.340 QEMU NVMe Ctrl (12340 ): 5736 I/Os completed (+1876) 00:11:39.340 QEMU NVMe Ctrl (12341 ): 5695 I/Os completed (+1940) 00:11:39.340 00:11:40.273 QEMU NVMe Ctrl (12340 ): 7648 I/Os completed (+1912) 00:11:40.273 QEMU NVMe Ctrl (12341 ): 7631 I/Os completed (+1936) 00:11:40.273 00:11:41.206 QEMU NVMe Ctrl (12340 ): 9558 I/Os completed (+1910) 00:11:41.206 QEMU NVMe Ctrl (12341 ): 9608 I/Os completed (+1977) 00:11:41.206 00:11:42.577 QEMU NVMe Ctrl (12340 ): 11444 I/Os completed (+1886) 00:11:42.577 QEMU NVMe Ctrl (12341 ): 11553 I/Os completed (+1945) 00:11:42.577 00:11:43.510 QEMU NVMe Ctrl (12340 ): 13368 I/Os completed (+1924) 00:11:43.510 QEMU NVMe Ctrl (12341 ): 13498 I/Os completed (+1945) 
00:11:43.510 00:11:44.444 QEMU NVMe Ctrl (12340 ): 15296 I/Os completed (+1928) 00:11:44.444 QEMU NVMe Ctrl (12341 ): 15462 I/Os completed (+1964) 00:11:44.444 00:11:45.381 QEMU NVMe Ctrl (12340 ): 16872 I/Os completed (+1576) 00:11:45.381 QEMU NVMe Ctrl (12341 ): 17173 I/Os completed (+1711) 00:11:45.381 00:11:46.315 QEMU NVMe Ctrl (12340 ): 18500 I/Os completed (+1628) 00:11:46.315 QEMU NVMe Ctrl (12341 ): 18877 I/Os completed (+1704) 00:11:46.315 00:11:47.250 QEMU NVMe Ctrl (12340 ): 19980 I/Os completed (+1480) 00:11:47.250 QEMU NVMe Ctrl (12341 ): 20532 I/Os completed (+1655) 00:11:47.250 00:11:48.185 QEMU NVMe Ctrl (12340 ): 21600 I/Os completed (+1620) 00:11:48.185 QEMU NVMe Ctrl (12341 ): 22241 I/Os completed (+1709) 00:11:48.185 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.444 [2024-07-11 15:20:01.900886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:48.444 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:48.444 [2024-07-11 15:20:01.903285] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.903366] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.903400] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.903429] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:48.444 [2024-07-11 15:20:01.906870] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.906959] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.906988] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.907014] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:48.444 EAL: Scan for (pci) bus failed. 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.444 [2024-07-11 15:20:01.933546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:48.444 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:48.444 [2024-07-11 15:20:01.935858] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.936115] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.936304] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.936383] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:48.444 [2024-07-11 15:20:01.939664] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.939872] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.940049] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 [2024-07-11 15:20:01.940220] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:48.444 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:48.444 15:20:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.444 EAL: Scan for (pci) bus failed. 00:11:48.444 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.444 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.444 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.712 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.712 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.713 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.713 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.713 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:48.713 Attaching to 0000:00:10.0 00:11:48.713 Attached to 0000:00:10.0 00:11:48.713 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.713 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.713 15:20:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:48.713 Attaching to 0000:00:11.0 00:11:48.713 Attached to 0000:00:11.0 00:11:49.293 QEMU NVMe Ctrl (12340 ): 1004 I/Os completed (+1004) 00:11:49.293 QEMU NVMe Ctrl (12341 ): 895 I/Os completed (+895) 00:11:49.293 00:11:50.225 QEMU NVMe Ctrl (12340 ): 2424 I/Os completed (+1420) 00:11:50.225 QEMU NVMe Ctrl (12341 ): 2456 I/Os completed (+1561) 00:11:50.225 00:11:51.601 QEMU NVMe Ctrl (12340 ): 3904 I/Os completed (+1480) 00:11:51.601 QEMU NVMe Ctrl (12341 ): 4030 I/Os completed (+1574) 00:11:51.601 00:11:52.536 QEMU NVMe Ctrl (12340 ): 5432 I/Os completed (+1528) 00:11:52.536 QEMU NVMe Ctrl (12341 ): 5686 I/Os completed (+1656) 00:11:52.536 00:11:53.473 QEMU NVMe Ctrl (12340 ): 6934 I/Os completed (+1502) 00:11:53.473 QEMU NVMe Ctrl (12341 ): 7244 I/Os completed (+1558) 00:11:53.473 00:11:54.409 QEMU NVMe Ctrl (12340 ): 8445 I/Os completed (+1511) 00:11:54.409 QEMU NVMe Ctrl (12341 ): 8914 I/Os completed (+1670) 00:11:54.409 00:11:55.346 QEMU NVMe Ctrl (12340 ): 10117 I/Os completed (+1672) 00:11:55.346 QEMU NVMe Ctrl (12341 ): 10673 I/Os completed (+1759) 00:11:55.346 00:11:56.281 
QEMU NVMe Ctrl (12340 ): 11929 I/Os completed (+1812) 00:11:56.281 QEMU NVMe Ctrl (12341 ): 12554 I/Os completed (+1881) 00:11:56.281 00:11:57.217 QEMU NVMe Ctrl (12340 ): 13839 I/Os completed (+1910) 00:11:57.217 QEMU NVMe Ctrl (12341 ): 14530 I/Os completed (+1976) 00:11:57.217 00:11:58.593 QEMU NVMe Ctrl (12340 ): 15775 I/Os completed (+1936) 00:11:58.593 QEMU NVMe Ctrl (12341 ): 16490 I/Os completed (+1960) 00:11:58.593 00:11:59.528 QEMU NVMe Ctrl (12340 ): 17616 I/Os completed (+1841) 00:11:59.528 QEMU NVMe Ctrl (12341 ): 18386 I/Os completed (+1896) 00:11:59.528 00:12:00.463 QEMU NVMe Ctrl (12340 ): 19440 I/Os completed (+1824) 00:12:00.463 QEMU NVMe Ctrl (12341 ): 20245 I/Os completed (+1859) 00:12:00.463 00:12:00.722 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:00.722 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.722 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.722 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.722 [2024-07-11 15:20:14.250350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:00.722 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:00.722 [2024-07-11 15:20:14.252394] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 [2024-07-11 15:20:14.252663] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 [2024-07-11 15:20:14.252717] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 [2024-07-11 15:20:14.252743] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:00.722 [2024-07-11 15:20:14.255725] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 [2024-07-11 15:20:14.255798] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 [2024-07-11 15:20:14.255823] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 [2024-07-11 15:20:14.255844] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.722 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.722 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.722 [2024-07-11 15:20:14.280647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:00.722 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:00.722 [2024-07-11 15:20:14.284867] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 [2024-07-11 15:20:14.285115] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 [2024-07-11 15:20:14.285281] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 [2024-07-11 15:20:14.285353] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:00.723 [2024-07-11 15:20:14.288084] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 [2024-07-11 15:20:14.288161] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 [2024-07-11 15:20:14.288192] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 [2024-07-11 15:20:14.288212] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.723 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:00.723 EAL: Scan for (pci) bus failed. 00:12:00.723 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:00.723 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:00.981 Attaching to 0000:00:10.0 00:12:00.981 Attached to 0000:00:10.0 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:00.981 15:20:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.981 Attaching to 0000:00:11.0 00:12:00.981 Attached to 0000:00:11.0 00:12:00.981 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:00.981 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:00.981 [2024-07-11 15:20:14.570263] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:13.196 15:20:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:13.196 15:20:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.196 15:20:26 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.01 00:12:13.196 15:20:26 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.01 00:12:13.196 15:20:26 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:12:13.196 15:20:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.01 00:12:13.196 15:20:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.01 2 00:12:13.196 remove_attach_helper took 43.01s to complete (handling 2 nvme drive(s)) 15:20:26 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72981 00:12:19.745 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72981) - No such process 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72981 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:19.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=73516 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:19.745 15:20:32 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 73516 00:12:19.745 15:20:32 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 73516 ']' 00:12:19.745 15:20:32 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.745 15:20:32 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.745 15:20:32 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.745 15:20:32 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.745 15:20:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.745 [2024-07-11 15:20:32.718473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:19.745 [2024-07-11 15:20:32.718972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73516 ] 00:12:19.745 [2024-07-11 15:20:32.897266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.745 [2024-07-11 15:20:33.094752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:20.309 15:20:33 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:20.309 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:20.310 15:20:33 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.867 15:20:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.867 15:20:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.867 15:20:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.867 [2024-07-11 15:20:39.821927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:26.867 [2024-07-11 15:20:39.824614] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:39.824669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:39.824710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 [2024-07-11 15:20:39.824738] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:39.824758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:39.824774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 [2024-07-11 15:20:39.824791] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:39.824805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:39.824821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 [2024-07-11 15:20:39.824836] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:39.824854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:39.824868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:26.867 15:20:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:26.867 [2024-07-11 15:20:40.221951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:26.867 [2024-07-11 15:20:40.224772] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:40.224844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:40.224866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 [2024-07-11 15:20:40.224893] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:40.224909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:40.224926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 [2024-07-11 15:20:40.224941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:40.224956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:40.224970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 [2024-07-11 15:20:40.224986] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.867 [2024-07-11 15:20:40.224999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.867 [2024-07-11 15:20:40.225015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.867 15:20:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.867 15:20:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.867 15:20:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:26.867 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.139 15:20:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.345 15:20:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.345 15:20:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.345 15:20:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.345 15:20:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.345 15:20:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.345 [2024-07-11 15:20:52.822116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
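Annotation: the repeated bdev_bdfs / jq / sort -u lines in the trace above come from a small helper that lists the PCI addresses (BDFs) of every NVMe-backed bdev the running SPDK app still exposes, and the heavily escaped [[ ... == \0\0\0\0\:... ]] comparison is just bash xtrace quoting of a plain string match. A minimal sketch reconstructed from the trace at sw_hotplug.sh@12-13 and @70-71 -- not the verbatim script; rpc_cmd is assumed to be the harness's wrapper around scripts/rpc.py:

    # List BDFs of all NVMe-backed bdevs known to the running SPDK app.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Post-rescan verification: every expected BDF must be visible again.
    # `nvmes` is assumed to hold the expected set, e.g.
    # nvmes=(0000:00:10.0 0000:00:11.0).
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]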
00:12:39.345 15:20:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.345 [2024-07-11 15:20:52.824985] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.345 [2024-07-11 15:20:52.825082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.345 [2024-07-11 15:20:52.825110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.345 [2024-07-11 15:20:52.825137] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.345 [2024-07-11 15:20:52.825155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.345 [2024-07-11 15:20:52.825170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.345 [2024-07-11 15:20:52.825187] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.345 [2024-07-11 15:20:52.825202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.345 [2024-07-11 15:20:52.825218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.345 [2024-07-11 15:20:52.825232] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.345 [2024-07-11 15:20:52.825248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.345 [2024-07-11 15:20:52.825262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:39.345 15:20:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.916 15:20:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.916 15:20:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.916 15:20:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:39.916 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:39.916 [2024-07-11 15:20:53.522152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
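Annotation: the (( 2 > 0 )) / sleep 0.5 / printf 'Still waiting for %s to be gone' cycle traced at sw_hotplug.sh@50-51 is the detach poll -- after hot-removal is requested, the script re-reads the bdev list twice a second until it drains. Reconstructed shape (a sketch of the loop as it appears in the trace, not the verbatim script):

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done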
00:12:39.916 [2024-07-11 15:20:53.524975] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.916 [2024-07-11 15:20:53.525076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.916 [2024-07-11 15:20:53.525100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.916 [2024-07-11 15:20:53.525131] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.916 [2024-07-11 15:20:53.525148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.916 [2024-07-11 15:20:53.525180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.916 [2024-07-11 15:20:53.525195] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.917 [2024-07-11 15:20:53.525211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.917 [2024-07-11 15:20:53.525225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:39.917 [2024-07-11 15:20:53.525242] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.917 [2024-07-11 15:20:53.525255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:39.917 [2024-07-11 15:20:53.525272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.482 15:20:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.482 15:20:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.482 15:20:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:40.482 15:20:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:40.482 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.482 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.482 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:40.740 15:20:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.962 15:21:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.962 15:21:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.962 15:21:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.962 15:21:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.962 15:21:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:52.962 15:21:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.962 [2024-07-11 15:21:06.422390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
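Annotation: the bare echo lines at sw_hotplug.sh@56-62 show only the values written, because xtrace does not print redirection targets. The sysfs paths below are therefore an assumption based on the standard PCI rescan/rebind interface, not something visible in this log (the BDF is echoed twice in the trace, at @60 and @61, presumably to two different probe/bind files; the sketch collapses that into a single drivers_probe write):

    echo 1 > /sys/bus/pci/rescan                      # @56 (assumed target)
    for dev in "${nvmes[@]}"; do                      # @58
        # @59: pin the driver the device should come back under (assumed)
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        # @60/@61: ask the kernel to (re)probe the device (assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe
        # @62: clear the override again (assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done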
00:12:52.962 [2024-07-11 15:21:06.425521] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.962 [2024-07-11 15:21:06.425738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.962 [2024-07-11 15:21:06.425952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.962 [2024-07-11 15:21:06.426222] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.962 [2024-07-11 15:21:06.426376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.962 [2024-07-11 15:21:06.426551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.962 [2024-07-11 15:21:06.426706] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.962 [2024-07-11 15:21:06.426893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.962 [2024-07-11 15:21:06.427105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.962 [2024-07-11 15:21:06.427250] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:52.962 [2024-07-11 15:21:06.427406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:52.962 [2024-07-11 15:21:06.427566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:52.962 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:53.221 [2024-07-11 15:21:06.822380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state.
00:12:53.221 [2024-07-11 15:21:06.824924] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.221 [2024-07-11 15:21:06.825168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.221 [2024-07-11 15:21:06.825319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.221 [2024-07-11 15:21:06.825591] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.221 [2024-07-11 15:21:06.825728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.221 [2024-07-11 15:21:06.825916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.221 [2024-07-11 15:21:06.826088] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.221 [2024-07-11 15:21:06.826283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.221 [2024-07-11 15:21:06.826478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.221 [2024-07-11 15:21:06.826641] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.221 [2024-07-11 15:21:06.826773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.221 [2024-07-11 15:21:06.826935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.479 15:21:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.479 15:21:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.479 15:21:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:53.479 15:21:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:53.737 15:21:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.63 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.63 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.63 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.63 2 00:13:05.941 remove_attach_helper took 45.63s to complete (handling 2 nvme drive(s)) 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:13:05.941 15:21:19 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:05.941 15:21:19 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:05.941 15:21:19 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.509 15:21:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.509 15:21:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.509 [2024-07-11 15:21:25.481879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:12.509 15:21:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.509 [2024-07-11 15:21:25.484375] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.484451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.484474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 [2024-07-11 15:21:25.484498] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.484514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.484527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 [2024-07-11 15:21:25.484542] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.484555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.484569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 [2024-07-11 15:21:25.484582] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.484595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.484607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:12.509 [2024-07-11 15:21:25.981900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
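Annotation: the removal half of each hotplug event is the per-device echo 1 traced at sw_hotplug.sh@39-40 above. The redirection target is again hidden by xtrace; the conventional hook would be the device's sysfs remove file (an assumption, not visible in this log):

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target of 'echo 1'
    done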
00:13:12.509 [2024-07-11 15:21:25.985911] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.985964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.985985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 [2024-07-11 15:21:25.986011] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.986037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.986053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 [2024-07-11 15:21:25.986067] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.986082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.986094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 [2024-07-11 15:21:25.986109] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:12.509 [2024-07-11 15:21:25.986122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.509 [2024-07-11 15:21:25.986136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:12.509 15:21:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:12.509 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:12.509 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.509 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.509 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.509 15:21:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.509 15:21:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.509 15:21:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.509 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:12.509 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:12.767 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:13.025 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:13.025 15:21:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.251 15:21:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.251 15:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.251 15:21:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:25.251 [2024-07-11 15:21:38.482060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:25.251 [2024-07-11 15:21:38.484146] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.251 [2024-07-11 15:21:38.484329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.251 [2024-07-11 15:21:38.484518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.251 [2024-07-11 15:21:38.484733] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.251 [2024-07-11 15:21:38.484905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.251 [2024-07-11 15:21:38.485104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.251 [2024-07-11 15:21:38.485303] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.251 [2024-07-11 15:21:38.485422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.251 [2024-07-11 15:21:38.485557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.251 [2024-07-11 15:21:38.485634] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.251 [2024-07-11 15:21:38.485686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.251 [2024-07-11 15:21:38.485794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:25.251 15:21:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.251 15:21:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.251 15:21:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:25.251 15:21:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:25.510 [2024-07-11 15:21:38.882065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:25.510 [2024-07-11 15:21:38.884910] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.510 [2024-07-11 15:21:38.885163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.510 [2024-07-11 15:21:38.885391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.510 [2024-07-11 15:21:38.885660] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.510 [2024-07-11 15:21:38.885916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.510 [2024-07-11 15:21:38.886110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.510 [2024-07-11 15:21:38.886338] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.510 [2024-07-11 15:21:38.886552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.510 [2024-07-11 15:21:38.886710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.510 [2024-07-11 15:21:38.886865] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:25.510 [2024-07-11 15:21:38.887086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:25.510 [2024-07-11 15:21:38.887245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:25.510 15:21:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.510 15:21:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.510 15:21:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:25.510 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:25.769 15:21:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:37.973 15:21:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.973 15:21:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:37.973 15:21:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:37.973 15:21:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.973 15:21:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:37.973 [2024-07-11 15:21:51.482761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:37.973 [2024-07-11 15:21:51.485476] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.973 [2024-07-11 15:21:51.485529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:37.973 [2024-07-11 15:21:51.485555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.973 [2024-07-11 15:21:51.485581] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.973 [2024-07-11 15:21:51.485599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:37.973 [2024-07-11 15:21:51.485614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.973 [2024-07-11 15:21:51.485631] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.973 [2024-07-11 15:21:51.485645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:37.973 [2024-07-11 15:21:51.485664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.973 [2024-07-11 15:21:51.485680] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:37.973 [2024-07-11 15:21:51.485696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:37.973 [2024-07-11 15:21:51.485710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:37.973 15:21:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:37.973 15:21:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:38.540 [2024-07-11 15:21:51.982810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
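Annotation: the 'took 45.63s' / 'took 45.05s' figures in this run come from the timing wrapper traced at autotest_common.sh@705-718 (local TIMEFORMAT=%2R, then an echo of the measured value). A hedged sketch of that pattern -- simplified, not the verbatim harness code, which also juggles the wrapped command's own output through extra descriptors:

    # Time a command with bash's `time`, print only the real time in
    # seconds (%2R = two decimals), and preserve the command's exit status.
    timing_cmd() {
        local cmd_es=0
        local time TIMEFORMAT=%2R
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)   # as in the trace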
00:13:38.540 [2024-07-11 15:21:51.985385] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.540 [2024-07-11 15:21:51.985440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.540 [2024-07-11 15:21:51.985464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.540 [2024-07-11 15:21:51.985490] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.540 [2024-07-11 15:21:51.985504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.540 [2024-07-11 15:21:51.985519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.540 [2024-07-11 15:21:51.985532] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.540 [2024-07-11 15:21:51.985546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.540 [2024-07-11 15:21:51.985559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.540 [2024-07-11 15:21:51.985574] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:38.540 [2024-07-11 15:21:51.985587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:38.540 [2024-07-11 15:21:51.985603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:38.540 [2024-07-11 15:21:51.985620] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:38.540 [2024-07-11 15:21:51.985637] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:38.540 [2024-07-11 15:21:51.985649] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:38.540 [2024-07-11 15:21:51.985661] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:38.540 15:21:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.540 15:21:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:38.540 15:21:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:38.540 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- 
# echo 0000:00:10.0 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:38.797 15:21:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.05 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.05 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.05 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.05 2 00:13:50.996 remove_attach_helper took 45.05s to complete (handling 2 nvme drive(s)) 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:50.996 15:22:04 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 73516 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 73516 ']' 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 73516 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73516 00:13:50.996 killing process with pid 73516 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73516' 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@967 -- # kill 73516 00:13:50.996 15:22:04 sw_hotplug -- common/autotest_common.sh@972 -- # wait 73516 00:13:52.897 15:22:06 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:53.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:53.724 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:53.724 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:53.724 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:53.724 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:53.983 ************************************ 00:13:53.983 END TEST sw_hotplug 00:13:53.983 ************************************ 00:13:53.983 00:13:53.983 real 2m31.695s 00:13:53.983 user 1m52.102s 00:13:53.983 sys 0m19.328s 00:13:53.983 15:22:07 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.983 15:22:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:53.983 15:22:07 -- common/autotest_common.sh@1142 -- # return 0 00:13:53.983 15:22:07 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:13:53.983 15:22:07 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:53.983 15:22:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:53.983 15:22:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.983 15:22:07 -- common/autotest_common.sh@10 -- # set +x 00:13:53.983 ************************************ 00:13:53.983 START TEST nvme_xnvme 00:13:53.983 ************************************ 00:13:53.983 15:22:07 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:53.984 * Looking for test storage... 00:13:53.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:53.984 15:22:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.984 15:22:07 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.984 15:22:07 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.984 15:22:07 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.984 15:22:07 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.984 15:22:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.984 15:22:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.984 15:22:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:53.984 15:22:07 nvme_xnvme -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.984 15:22:07 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:53.984 15:22:07 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:53.984 15:22:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.984 15:22:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.984 ************************************ 00:13:53.984 START TEST xnvme_to_malloc_dd_copy 00:13:53.984 ************************************ 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- 
xnvme/xnvme.sh@42 -- # gen_conf 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:53.984 15:22:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:54.243 { 00:13:54.243 "subsystems": [ 00:13:54.243 { 00:13:54.243 "subsystem": "bdev", 00:13:54.243 "config": [ 00:13:54.243 { 00:13:54.243 "params": { 00:13:54.243 "block_size": 512, 00:13:54.243 "num_blocks": 2097152, 00:13:54.243 "name": "malloc0" 00:13:54.243 }, 00:13:54.243 "method": "bdev_malloc_create" 00:13:54.243 }, 00:13:54.243 { 00:13:54.243 "params": { 00:13:54.243 "io_mechanism": "libaio", 00:13:54.243 "filename": "/dev/nullb0", 00:13:54.243 "name": "null0" 00:13:54.243 }, 00:13:54.243 "method": "bdev_xnvme_create" 00:13:54.243 }, 00:13:54.243 { 00:13:54.243 "method": "bdev_wait_for_examine" 00:13:54.243 } 00:13:54.243 ] 00:13:54.243 } 00:13:54.243 ] 00:13:54.243 } 00:13:54.243 [2024-07-11 15:22:07.663004] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:54.243 [2024-07-11 15:22:07.663408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74866 ] 00:13:54.243 [2024-07-11 15:22:07.837609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.501 [2024-07-11 15:22:08.063825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.418  Copying: 182/1024 [MB] (182 MBps) Copying: 367/1024 [MB] (185 MBps) Copying: 552/1024 [MB] (184 MBps) Copying: 739/1024 [MB] (187 MBps) Copying: 926/1024 [MB] (187 MBps) Copying: 1024/1024 [MB] (average 185 MBps) 00:14:04.418 00:14:04.418 15:22:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:04.418 15:22:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:04.418 15:22:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:04.418 15:22:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:04.418 { 00:14:04.418 "subsystems": [ 00:14:04.418 { 00:14:04.418 "subsystem": "bdev", 00:14:04.418 "config": [ 00:14:04.418 { 00:14:04.418 "params": { 00:14:04.418 "block_size": 512, 00:14:04.418 "num_blocks": 2097152, 00:14:04.418 "name": "malloc0" 00:14:04.418 }, 00:14:04.418 "method": "bdev_malloc_create" 00:14:04.418 }, 00:14:04.418 { 00:14:04.418 "params": { 00:14:04.418 "io_mechanism": "libaio", 00:14:04.419 "filename": "/dev/nullb0", 00:14:04.419 "name": "null0" 00:14:04.419 }, 00:14:04.419 "method": "bdev_xnvme_create" 00:14:04.419 }, 00:14:04.419 { 00:14:04.419 "method": "bdev_wait_for_examine" 00:14:04.419 } 00:14:04.419 ] 00:14:04.419 } 00:14:04.419 ] 00:14:04.419 } 00:14:04.419 [2024-07-11 15:22:17.946307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:04.419 [2024-07-11 15:22:17.946448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74981 ] 00:14:04.676 [2024-07-11 15:22:18.110249] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.676 [2024-07-11 15:22:18.272630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.825  Copying: 185/1024 [MB] (185 MBps) Copying: 372/1024 [MB] (186 MBps) Copying: 560/1024 [MB] (188 MBps) Copying: 746/1024 [MB] (185 MBps) Copying: 937/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 187 MBps) 00:14:14.825 00:14:14.825 15:22:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:14.825 15:22:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:14.825 15:22:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:14.825 15:22:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:14.825 15:22:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:14.825 15:22:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:14.825 { 00:14:14.825 "subsystems": [ 00:14:14.825 { 00:14:14.825 "subsystem": "bdev", 00:14:14.825 "config": [ 00:14:14.825 { 00:14:14.825 "params": { 00:14:14.825 "block_size": 512, 00:14:14.825 "num_blocks": 2097152, 00:14:14.825 "name": "malloc0" 00:14:14.825 }, 00:14:14.825 "method": "bdev_malloc_create" 00:14:14.825 }, 00:14:14.825 { 00:14:14.825 "params": { 00:14:14.825 "io_mechanism": "io_uring", 00:14:14.825 "filename": "/dev/nullb0", 00:14:14.825 "name": "null0" 00:14:14.825 }, 00:14:14.825 "method": "bdev_xnvme_create" 00:14:14.825 }, 00:14:14.826 { 00:14:14.826 "method": "bdev_wait_for_examine" 00:14:14.826 } 00:14:14.826 ] 00:14:14.826 } 00:14:14.826 ] 00:14:14.826 } 00:14:14.826 [2024-07-11 15:22:28.078433] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:14.826 [2024-07-11 15:22:28.078594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75091 ] 00:14:14.826 [2024-07-11 15:22:28.250273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.826 [2024-07-11 15:22:28.409113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.467  Copying: 204/1024 [MB] (204 MBps) Copying: 409/1024 [MB] (204 MBps) Copying: 615/1024 [MB] (206 MBps) Copying: 820/1024 [MB] (204 MBps) Copying: 1024/1024 [MB] (average 204 MBps) 00:14:24.467 00:14:24.467 15:22:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:24.467 15:22:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:24.467 15:22:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:24.467 15:22:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:24.467 { 00:14:24.467 "subsystems": [ 00:14:24.467 { 00:14:24.467 "subsystem": "bdev", 00:14:24.467 "config": [ 00:14:24.467 { 00:14:24.467 "params": { 00:14:24.467 "block_size": 512, 00:14:24.467 "num_blocks": 2097152, 00:14:24.467 "name": "malloc0" 00:14:24.467 }, 00:14:24.467 "method": "bdev_malloc_create" 00:14:24.467 }, 00:14:24.467 { 00:14:24.467 "params": { 00:14:24.467 "io_mechanism": "io_uring", 00:14:24.467 "filename": "/dev/nullb0", 00:14:24.467 "name": "null0" 00:14:24.467 }, 00:14:24.467 "method": "bdev_xnvme_create" 00:14:24.467 }, 00:14:24.467 { 00:14:24.467 "method": "bdev_wait_for_examine" 00:14:24.467 } 00:14:24.467 ] 00:14:24.467 } 00:14:24.467 ] 00:14:24.467 } 00:14:24.467 [2024-07-11 15:22:37.763456] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:24.467 [2024-07-11 15:22:37.764075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75200 ] 00:14:24.467 [2024-07-11 15:22:37.931630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.726 [2024-07-11 15:22:38.110062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.106  Copying: 208/1024 [MB] (208 MBps) Copying: 414/1024 [MB] (206 MBps) Copying: 621/1024 [MB] (207 MBps) Copying: 827/1024 [MB] (205 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:14:34.106 00:14:34.106 15:22:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:34.106 15:22:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:34.106 ************************************ 00:14:34.106 END TEST xnvme_to_malloc_dd_copy 00:14:34.106 ************************************ 00:14:34.106 00:14:34.106 real 0m39.775s 00:14:34.106 user 0m34.748s 00:14:34.106 sys 0m4.471s 00:14:34.106 15:22:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.106 15:22:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:34.106 15:22:47 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:34.106 15:22:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:34.106 15:22:47 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:34.106 15:22:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.106 15:22:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:34.106 ************************************ 00:14:34.106 START TEST xnvme_bdevperf 00:14:34.106 ************************************ 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 
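For reference, each of the four copy passes in this test (libaio at roughly 185 to 187 MBps, io_uring at roughly 204 to 207 MBps against the same 1 GiB null_blk device) follows one pattern: load null_blk, then hand spdk_dd a bdev config on an inherited file descriptor. A minimal standalone sketch, assuming the same repo path as this VM; the real run generates the JSON with gen_conf rather than a heredoc:

modprobe null_blk gb=1    # exposes /dev/nullb0 (1 GiB), as init_null_blk does above
# malloc0 -> null0 copy; the config arrives on fd 62, matching --json /dev/fd/62
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 62<<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_malloc_create", "params": {"name": "malloc0", "block_size": 512, "num_blocks": 2097152}},
  {"method": "bdev_xnvme_create", "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
modprobe -r null_blk      # remove_null_blk

The reverse pass swaps --ib and --ob, and the io_uring passes only change io_mechanism; the bdevperf runs below reuse the same gen_conf pattern with build/examples/bdevperf -q 64 -w randread -t 5 -T null0 -o 4096.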
00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:34.106 15:22:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:34.106 { 00:14:34.106 "subsystems": [ 00:14:34.106 { 00:14:34.106 "subsystem": "bdev", 00:14:34.106 "config": [ 00:14:34.106 { 00:14:34.106 "params": { 00:14:34.106 "io_mechanism": "libaio", 00:14:34.106 "filename": "/dev/nullb0", 00:14:34.106 "name": "null0" 00:14:34.106 }, 00:14:34.106 "method": "bdev_xnvme_create" 00:14:34.106 }, 00:14:34.106 { 00:14:34.106 "method": "bdev_wait_for_examine" 00:14:34.106 } 00:14:34.106 ] 00:14:34.106 } 00:14:34.106 ] 00:14:34.106 } 00:14:34.106 [2024-07-11 15:22:47.488155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:34.106 [2024-07-11 15:22:47.488529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75333 ] 00:14:34.106 [2024-07-11 15:22:47.661293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.365 [2024-07-11 15:22:47.833762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.624 Running I/O for 5 seconds... 00:14:39.937 00:14:39.937 Latency(us) 00:14:39.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.937 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:39.937 null0 : 5.00 130076.38 508.11 0.00 0.00 488.98 154.53 741.00 00:14:39.937 =================================================================================================================== 00:14:39.937 Total : 130076.38 508.11 0.00 0.00 488.98 154.53 741.00 00:14:40.504 15:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:40.505 15:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:40.763 15:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:40.763 15:22:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:40.763 15:22:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:40.763 15:22:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.763 { 00:14:40.763 "subsystems": [ 00:14:40.763 { 00:14:40.763 "subsystem": "bdev", 00:14:40.763 "config": [ 00:14:40.763 { 00:14:40.763 "params": { 00:14:40.763 "io_mechanism": "io_uring", 00:14:40.763 "filename": "/dev/nullb0", 00:14:40.763 "name": "null0" 00:14:40.763 }, 00:14:40.763 "method": "bdev_xnvme_create" 00:14:40.763 }, 00:14:40.763 { 00:14:40.763 "method": "bdev_wait_for_examine" 00:14:40.763 } 00:14:40.763 ] 00:14:40.763 } 00:14:40.763 ] 00:14:40.763 } 00:14:40.763 [2024-07-11 15:22:54.217666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:40.763 [2024-07-11 15:22:54.217877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75407 ] 00:14:41.022 [2024-07-11 15:22:54.391535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.022 [2024-07-11 15:22:54.551297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.281 Running I/O for 5 seconds... 00:14:46.545 00:14:46.545 Latency(us) 00:14:46.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.545 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:46.545 null0 : 5.00 170634.38 666.54 0.00 0.00 372.11 249.48 580.89 00:14:46.545 =================================================================================================================== 00:14:46.545 Total : 170634.38 666.54 0.00 0.00 372.11 249.48 580.89 00:14:47.480 15:23:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:47.480 15:23:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:47.480 ************************************ 00:14:47.480 END TEST xnvme_bdevperf 00:14:47.480 ************************************ 00:14:47.480 00:14:47.480 real 0m13.517s 00:14:47.480 user 0m10.586s 00:14:47.480 sys 0m2.719s 00:14:47.480 15:23:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.480 15:23:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.480 15:23:00 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:47.480 ************************************ 00:14:47.480 END TEST nvme_xnvme 00:14:47.480 ************************************ 00:14:47.480 00:14:47.480 real 0m53.482s 00:14:47.480 user 0m45.400s 00:14:47.480 sys 0m7.303s 00:14:47.480 15:23:00 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.480 15:23:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.480 15:23:00 -- common/autotest_common.sh@1142 -- # return 0 00:14:47.480 15:23:00 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:47.480 15:23:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.480 15:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.480 15:23:00 -- common/autotest_common.sh@10 -- # set +x 00:14:47.480 ************************************ 00:14:47.480 START TEST blockdev_xnvme 00:14:47.480 ************************************ 00:14:47.480 15:23:00 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:47.480 * Looking for test storage... 
00:14:47.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:14:47.480 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75547 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75547 00:14:47.481 15:23:01 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:47.481 15:23:01 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 75547 ']' 00:14:47.481 15:23:01 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.481 15:23:01 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.481 15:23:01 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.481 15:23:01 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.481 15:23:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.737 [2024-07-11 15:23:01.191393] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:47.737 [2024-07-11 15:23:01.191580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75547 ] 00:14:47.995 [2024-07-11 15:23:01.363141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.995 [2024-07-11 15:23:01.530693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.562 15:23:02 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.562 15:23:02 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:14:48.562 15:23:02 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:14:48.562 15:23:02 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:14:48.562 15:23:02 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:48.562 15:23:02 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:48.562 15:23:02 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:49.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:49.130 Waiting for block devices as requested 00:14:49.130 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:49.387 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:49.387 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:49.387 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:54.653 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:54.653 15:23:08 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:54.653 nvme0n1 00:14:54.653 nvme1n1 00:14:54.653 nvme2n1 00:14:54.653 nvme2n2 00:14:54.653 nvme2n3 00:14:54.653 nvme3n1 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:14:54.653 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == 
false)' 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.653 15:23:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.912 15:23:08 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.912 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:14:54.912 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:14:54.912 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7b718a34-9c4b-4fb8-a33d-fa332c545e5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7b718a34-9c4b-4fb8-a33d-fa332c545e5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "47ddb309-920a-4ca1-8820-097b5fe12b18"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "47ddb309-920a-4ca1-8820-097b5fe12b18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a8f6b6ee-9aae-4f0d-915b-8c93f427d3f4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8f6b6ee-9aae-4f0d-915b-8c93f427d3f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "b493f083-21c6-40a6-9187-e4aecc8dbb99"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b493f083-21c6-40a6-9187-e4aecc8dbb99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ec314797-b0a6-4075-b34d-5288895e0d65"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec314797-b0a6-4075-b34d-5288895e0d65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "622148c3-dd94-4c48-9829-c65558d1585d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "622148c3-dd94-4c48-9829-c65558d1585d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:54.913 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:14:54.913 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:14:54.913 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:14:54.913 15:23:08 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 75547 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 75547 ']' 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 75547 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75547 00:14:54.913 killing process with pid 75547 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.913 15:23:08 blockdev_xnvme -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 75547' 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 75547 00:14:54.913 15:23:08 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 75547 00:14:56.816 15:23:10 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:56.816 15:23:10 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:56.816 15:23:10 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:14:56.816 15:23:10 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.816 15:23:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:56.816 ************************************ 00:14:56.816 START TEST bdev_hello_world 00:14:56.816 ************************************ 00:14:56.816 15:23:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:56.816 [2024-07-11 15:23:10.355073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:56.816 [2024-07-11 15:23:10.355246] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75912 ] 00:14:57.075 [2024-07-11 15:23:10.526127] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.075 [2024-07-11 15:23:10.689475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.641 [2024-07-11 15:23:11.046258] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:57.641 [2024-07-11 15:23:11.046311] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:57.641 [2024-07-11 15:23:11.046349] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:57.641 [2024-07-11 15:23:11.048418] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:57.641 [2024-07-11 15:23:11.048730] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:57.641 [2024-07-11 15:23:11.048757] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:57.641 [2024-07-11 15:23:11.048952] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
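The hello_bdev pass above exercises the full open, write, read-back path against the first xnvme bdev; the "Hello World!" string read back confirms the write landed. The bdev.json it consumes was assembled earlier from the printf'd bdev_xnvme_create commands; a reduced single-device sketch of that config and invocation (the CI file carries all six devices):

cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create", "params": {"filename": "/dev/nvme0n1", "name": "nvme0n1", "io_mechanism": "io_uring"}},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
# writes "Hello World!" to nvme0n1 and reads it back, as the NOTICE lines show
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''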
00:14:57.641 00:14:57.641 [2024-07-11 15:23:11.048982] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:58.576 00:14:58.576 real 0m1.784s 00:14:58.576 user 0m1.481s 00:14:58.576 sys 0m0.188s 00:14:58.576 15:23:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:58.576 ************************************ 00:14:58.576 END TEST bdev_hello_world 00:14:58.576 ************************************ 00:14:58.576 15:23:12 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:58.576 15:23:12 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:58.576 15:23:12 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:14:58.576 15:23:12 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:58.576 15:23:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.576 15:23:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:58.576 ************************************ 00:14:58.576 START TEST bdev_bounds 00:14:58.576 ************************************ 00:14:58.576 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:14:58.577 Process bdevio pid: 75953 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=75953 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 75953' 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 75953 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 75953 ']' 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.577 15:23:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:58.577 [2024-07-11 15:23:12.171403] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:58.577 [2024-07-11 15:23:12.171551] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75953 ] 00:14:58.835 [2024-07-11 15:23:12.329637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.093 [2024-07-11 15:23:12.492877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.093 [2024-07-11 15:23:12.492981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.093 [2024-07-11 15:23:12.493005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.697 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.697 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:14:59.697 15:23:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:59.697 I/O targets: 00:14:59.697 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:59.697 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:59.697 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:59.697 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:59.697 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:59.697 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:59.697 00:14:59.697 00:14:59.697 CUnit - A unit testing framework for C - Version 2.1-3 00:14:59.697 http://cunit.sourceforge.net/ 00:14:59.697 00:14:59.697 00:14:59.697 Suite: bdevio tests on: nvme3n1 00:14:59.697 Test: blockdev write read block ...passed 00:14:59.697 Test: blockdev write zeroes read block ...passed 00:14:59.697 Test: blockdev write zeroes read no split ...passed 00:14:59.697 Test: blockdev write zeroes read split ...passed 00:14:59.697 Test: blockdev write zeroes read split partial ...passed 00:14:59.697 Test: blockdev reset ...passed 00:14:59.697 Test: blockdev write read 8 blocks ...passed 00:14:59.697 Test: blockdev write read size > 128k ...passed 00:14:59.697 Test: blockdev write read invalid size ...passed 00:14:59.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:59.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:59.697 Test: blockdev write read max offset ...passed 00:14:59.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:59.697 Test: blockdev writev readv 8 blocks ...passed 00:14:59.697 Test: blockdev writev readv 30 x 1block ...passed 00:14:59.697 Test: blockdev writev readv block ...passed 00:14:59.697 Test: blockdev writev readv size > 128k ...passed 00:14:59.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:59.697 Test: blockdev comparev and writev ...passed 00:14:59.697 Test: blockdev nvme passthru rw ...passed 00:14:59.697 Test: blockdev nvme passthru vendor specific ...passed 00:14:59.697 Test: blockdev nvme admin passthru ...passed 00:14:59.697 Test: blockdev copy ...passed 00:14:59.697 Suite: bdevio tests on: nvme2n3 00:14:59.697 Test: blockdev write read block ...passed 00:14:59.697 Test: blockdev write zeroes read block ...passed 00:14:59.697 Test: blockdev write zeroes read no split ...passed 00:14:59.697 Test: blockdev write zeroes read split ...passed 00:14:59.956 Test: blockdev write zeroes read split partial ...passed 00:14:59.956 Test: blockdev reset ...passed 
00:14:59.956 Test: blockdev write read 8 blocks ...passed 00:14:59.956 Test: blockdev write read size > 128k ...passed 00:14:59.956 Test: blockdev write read invalid size ...passed 00:14:59.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:59.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:59.956 Test: blockdev write read max offset ...passed 00:14:59.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:59.956 Test: blockdev writev readv 8 blocks ...passed 00:14:59.956 Test: blockdev writev readv 30 x 1block ...passed 00:14:59.956 Test: blockdev writev readv block ...passed 00:14:59.956 Test: blockdev writev readv size > 128k ...passed 00:14:59.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:59.956 Test: blockdev comparev and writev ...passed 00:14:59.956 Test: blockdev nvme passthru rw ...passed 00:14:59.956 Test: blockdev nvme passthru vendor specific ...passed 00:14:59.956 Test: blockdev nvme admin passthru ...passed 00:14:59.956 Test: blockdev copy ...passed 00:14:59.956 Suite: bdevio tests on: nvme2n2 00:14:59.956 Test: blockdev write read block ...passed 00:14:59.956 Test: blockdev write zeroes read block ...passed 00:14:59.956 Test: blockdev write zeroes read no split ...passed 00:14:59.956 Test: blockdev write zeroes read split ...passed 00:14:59.956 Test: blockdev write zeroes read split partial ...passed 00:14:59.956 Test: blockdev reset ...passed 00:14:59.956 Test: blockdev write read 8 blocks ...passed 00:14:59.956 Test: blockdev write read size > 128k ...passed 00:14:59.956 Test: blockdev write read invalid size ...passed 00:14:59.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:59.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:59.956 Test: blockdev write read max offset ...passed 00:14:59.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:59.956 Test: blockdev writev readv 8 blocks ...passed 00:14:59.956 Test: blockdev writev readv 30 x 1block ...passed 00:14:59.956 Test: blockdev writev readv block ...passed 00:14:59.956 Test: blockdev writev readv size > 128k ...passed 00:14:59.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:59.956 Test: blockdev comparev and writev ...passed 00:14:59.956 Test: blockdev nvme passthru rw ...passed 00:14:59.956 Test: blockdev nvme passthru vendor specific ...passed 00:14:59.956 Test: blockdev nvme admin passthru ...passed 00:14:59.956 Test: blockdev copy ...passed 00:14:59.956 Suite: bdevio tests on: nvme2n1 00:14:59.956 Test: blockdev write read block ...passed 00:14:59.956 Test: blockdev write zeroes read block ...passed 00:14:59.956 Test: blockdev write zeroes read no split ...passed 00:14:59.956 Test: blockdev write zeroes read split ...passed 00:14:59.956 Test: blockdev write zeroes read split partial ...passed 00:14:59.956 Test: blockdev reset ...passed 00:14:59.956 Test: blockdev write read 8 blocks ...passed 00:14:59.956 Test: blockdev write read size > 128k ...passed 00:14:59.956 Test: blockdev write read invalid size ...passed 00:14:59.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:59.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:59.956 Test: blockdev write read max offset ...passed 00:14:59.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:59.956 Test: blockdev writev readv 8 blocks 
...passed 00:14:59.956 Test: blockdev writev readv 30 x 1block ...passed 00:14:59.956 Test: blockdev writev readv block ...passed 00:14:59.956 Test: blockdev writev readv size > 128k ...passed 00:14:59.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:59.956 Test: blockdev comparev and writev ...passed 00:14:59.956 Test: blockdev nvme passthru rw ...passed 00:14:59.956 Test: blockdev nvme passthru vendor specific ...passed 00:14:59.956 Test: blockdev nvme admin passthru ...passed 00:14:59.956 Test: blockdev copy ...passed 00:14:59.956 Suite: bdevio tests on: nvme1n1 00:14:59.956 Test: blockdev write read block ...passed 00:14:59.956 Test: blockdev write zeroes read block ...passed 00:14:59.956 Test: blockdev write zeroes read no split ...passed 00:14:59.956 Test: blockdev write zeroes read split ...passed 00:14:59.956 Test: blockdev write zeroes read split partial ...passed 00:14:59.956 Test: blockdev reset ...passed 00:14:59.956 Test: blockdev write read 8 blocks ...passed 00:14:59.956 Test: blockdev write read size > 128k ...passed 00:14:59.956 Test: blockdev write read invalid size ...passed 00:14:59.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:59.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:59.956 Test: blockdev write read max offset ...passed 00:14:59.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:59.956 Test: blockdev writev readv 8 blocks ...passed 00:14:59.956 Test: blockdev writev readv 30 x 1block ...passed 00:14:59.956 Test: blockdev writev readv block ...passed 00:14:59.956 Test: blockdev writev readv size > 128k ...passed 00:14:59.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:59.956 Test: blockdev comparev and writev ...passed 00:14:59.956 Test: blockdev nvme passthru rw ...passed 00:14:59.956 Test: blockdev nvme passthru vendor specific ...passed 00:14:59.956 Test: blockdev nvme admin passthru ...passed 00:14:59.956 Test: blockdev copy ...passed 00:14:59.956 Suite: bdevio tests on: nvme0n1 00:14:59.956 Test: blockdev write read block ...passed 00:14:59.956 Test: blockdev write zeroes read block ...passed 00:14:59.956 Test: blockdev write zeroes read no split ...passed 00:15:00.214 Test: blockdev write zeroes read split ...passed 00:15:00.214 Test: blockdev write zeroes read split partial ...passed 00:15:00.214 Test: blockdev reset ...passed 00:15:00.214 Test: blockdev write read 8 blocks ...passed 00:15:00.214 Test: blockdev write read size > 128k ...passed 00:15:00.214 Test: blockdev write read invalid size ...passed 00:15:00.214 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:00.214 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:00.214 Test: blockdev write read max offset ...passed 00:15:00.214 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:00.214 Test: blockdev writev readv 8 blocks ...passed 00:15:00.214 Test: blockdev writev readv 30 x 1block ...passed 00:15:00.214 Test: blockdev writev readv block ...passed 00:15:00.214 Test: blockdev writev readv size > 128k ...passed 00:15:00.214 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:00.214 Test: blockdev comparev and writev ...passed 00:15:00.214 Test: blockdev nvme passthru rw ...passed 00:15:00.214 Test: blockdev nvme passthru vendor specific ...passed 00:15:00.214 Test: blockdev nvme admin passthru ...passed 00:15:00.214 Test: blockdev copy ...passed 
00:15:00.214 00:15:00.214 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.214 suites 6 6 n/a 0 0 00:15:00.214 tests 138 138 138 0 0 00:15:00.214 asserts 780 780 780 0 n/a 00:15:00.214 00:15:00.214 Elapsed time = 1.087 seconds 00:15:00.214 0 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 75953 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 75953 ']' 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 75953 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75953 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75953' 00:15:00.214 killing process with pid 75953 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 75953 00:15:00.214 15:23:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 75953 00:15:01.193 15:23:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:15:01.193 00:15:01.193 real 0m2.667s 00:15:01.193 user 0m6.456s 00:15:01.193 sys 0m0.349s 00:15:01.193 15:23:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.193 15:23:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:01.193 ************************************ 00:15:01.193 END TEST bdev_bounds 00:15:01.193 ************************************ 00:15:01.193 15:23:14 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:01.193 15:23:14 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:01.193 15:23:14 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:01.193 15:23:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.193 15:23:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.193 ************************************ 00:15:01.193 START TEST bdev_nbd 00:15:01.193 ************************************ 00:15:01.193 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
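The bounds run summarized above (6 suites, 138 tests, 780 asserts in about 1.1 seconds, all passed) pairs the bdevio app with its RPC-driven test script. A rough standalone equivalent, assuming the bdev.json from the earlier setup; the harness proper uses waitforlisten instead of a fixed sleep:

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"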
00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:15:01.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=76019 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 76019 /var/tmp/spdk-nbd.sock 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 76019 ']' 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.451 15:23:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:01.451 [2024-07-11 15:23:14.914341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:01.451 [2024-07-11 15:23:14.914772] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.710 [2024-07-11 15:23:15.087724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.710 [2024-07-11 15:23:15.247371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:02.275 15:23:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.533 
1+0 records in 00:15:02.533 1+0 records out 00:15:02.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628946 s, 6.5 MB/s 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:02.533 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.792 1+0 records in 00:15:02.792 1+0 records out 00:15:02.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621439 s, 6.6 MB/s 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:02.792 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:03.050 15:23:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.050 1+0 records in 00:15:03.050 1+0 records out 00:15:03.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638867 s, 6.4 MB/s 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.050 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.308 1+0 records in 00:15:03.308 1+0 records out 00:15:03.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846174 s, 4.8 MB/s 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.308 15:23:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.566 1+0 records in 00:15:03.566 1+0 records out 00:15:03.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00166393 s, 2.5 MB/s 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:03.566 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:15:04.133 15:23:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.133 1+0 records in 00:15:04.133 1+0 records out 00:15:04.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106206 s, 3.9 MB/s 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd0", 00:15:04.133 "bdev_name": "nvme0n1" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd1", 00:15:04.133 "bdev_name": "nvme1n1" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd2", 00:15:04.133 "bdev_name": "nvme2n1" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd3", 00:15:04.133 "bdev_name": "nvme2n2" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd4", 00:15:04.133 "bdev_name": "nvme2n3" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd5", 00:15:04.133 "bdev_name": "nvme3n1" 00:15:04.133 } 00:15:04.133 ]' 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd0", 00:15:04.133 "bdev_name": "nvme0n1" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd1", 00:15:04.133 "bdev_name": "nvme1n1" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd2", 00:15:04.133 "bdev_name": "nvme2n1" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd3", 00:15:04.133 "bdev_name": "nvme2n2" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": "/dev/nbd4", 00:15:04.133 "bdev_name": "nvme2n3" 00:15:04.133 }, 00:15:04.133 { 00:15:04.133 "nbd_device": 
"/dev/nbd5", 00:15:04.133 "bdev_name": "nvme3n1" 00:15:04.133 } 00:15:04.133 ]' 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:04.133 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:04.134 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:04.134 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:04.134 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.134 15:23:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:04.392 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.651 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.910 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.168 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:05.428 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:05.429 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:05.429 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:05.429 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.429 15:23:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.429 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:05.429 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.429 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.429 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.429 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.704 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.008 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:06.266 /dev/nbd0 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.266 1+0 records in 00:15:06.266 1+0 records out 00:15:06.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047748 s, 8.6 MB/s 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.266 15:23:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:06.525 /dev/nbd1 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.525 1+0 records in 00:15:06.525 1+0 records out 00:15:06.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632718 s, 6.5 MB/s 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:06.525 15:23:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:06.525 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:07.092 /dev/nbd10 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.092 1+0 records in 00:15:07.092 1+0 records out 00:15:07.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761114 s, 5.4 MB/s 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:07.092 /dev/nbd11 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:07.092 15:23:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.092 1+0 records in 00:15:07.092 1+0 records out 00:15:07.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000949049 s, 4.3 MB/s 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.092 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:07.351 /dev/nbd12 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.351 1+0 records in 00:15:07.351 1+0 records out 00:15:07.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804708 s, 5.1 MB/s 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.351 15:23:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:07.610 /dev/nbd13 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.610 1+0 records in 00:15:07.610 1+0 records out 00:15:07.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955717 s, 4.3 MB/s 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:07.610 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:07.869 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd0", 00:15:07.869 "bdev_name": "nvme0n1" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd1", 00:15:07.869 "bdev_name": "nvme1n1" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd10", 00:15:07.869 "bdev_name": "nvme2n1" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd11", 00:15:07.869 "bdev_name": "nvme2n2" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd12", 00:15:07.869 "bdev_name": "nvme2n3" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd13", 00:15:07.869 "bdev_name": "nvme3n1" 00:15:07.869 } 00:15:07.869 ]' 00:15:07.869 15:23:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd0", 00:15:07.869 "bdev_name": "nvme0n1" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd1", 00:15:07.869 "bdev_name": "nvme1n1" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd10", 00:15:07.869 "bdev_name": "nvme2n1" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd11", 00:15:07.869 "bdev_name": "nvme2n2" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd12", 00:15:07.869 "bdev_name": "nvme2n3" 00:15:07.869 }, 00:15:07.869 { 00:15:07.869 "nbd_device": "/dev/nbd13", 00:15:07.869 "bdev_name": "nvme3n1" 00:15:07.869 } 00:15:07.869 ]' 00:15:07.869 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:08.128 /dev/nbd1 00:15:08.128 /dev/nbd10 00:15:08.128 /dev/nbd11 00:15:08.128 /dev/nbd12 00:15:08.128 /dev/nbd13' 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:08.128 /dev/nbd1 00:15:08.128 /dev/nbd10 00:15:08.128 /dev/nbd11 00:15:08.128 /dev/nbd12 00:15:08.128 /dev/nbd13' 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:08.128 256+0 records in 00:15:08.128 256+0 records out 00:15:08.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709157 s, 148 MB/s 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:08.128 256+0 records in 00:15:08.128 256+0 records out 00:15:08.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169561 s, 6.2 MB/s 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.128 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:08.386 256+0 records in 00:15:08.386 256+0 records out 00:15:08.386 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.186228 s, 5.6 MB/s 00:15:08.386 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.386 15:23:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:08.645 256+0 records in 00:15:08.645 256+0 records out 00:15:08.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16632 s, 6.3 MB/s 00:15:08.645 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.645 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:08.645 256+0 records in 00:15:08.645 256+0 records out 00:15:08.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138507 s, 7.6 MB/s 00:15:08.645 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.645 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:08.904 256+0 records in 00:15:08.904 256+0 records out 00:15:08.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169901 s, 6.2 MB/s 00:15:08.904 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:08.904 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:09.163 256+0 records in 00:15:09.163 256+0 records out 00:15:09.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167934 s, 6.2 MB/s 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.163 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.421 15:23:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.679 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.936 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.194 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:10.452 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.453 15:23:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.728 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:10.985 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:11.243 malloc_lvol_verify 00:15:11.243 15:23:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:11.501 837b0e89-ccc3-4e19-a947-9f7d34974110 00:15:11.501 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:11.759 3ca35626-5973-4c96-88cc-8de9f98a0c2f 00:15:11.760 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:12.017 /dev/nbd0 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:12.017 mke2fs 1.46.5 (30-Dec-2021) 00:15:12.017 Discarding device blocks: 0/4096 done 00:15:12.017 Creating filesystem with 4096 1k blocks and 
1024 inodes 00:15:12.017 00:15:12.017 Allocating group tables: 0/1 done 00:15:12.017 Writing inode tables: 0/1 done 00:15:12.017 Creating journal (1024 blocks): done 00:15:12.017 Writing superblocks and filesystem accounting information: 0/1 done 00:15:12.017 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.017 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 76019 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 76019 ']' 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 76019 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76019 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.276 killing process with pid 76019 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76019' 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 76019 00:15:12.276 15:23:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 76019 00:15:13.651 15:23:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:15:13.651 00:15:13.651 real 0m12.062s 00:15:13.651 user 0m17.031s 00:15:13.651 sys 0m3.884s 00:15:13.651 15:23:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.651 15:23:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:13.651 ************************************ 00:15:13.651 END TEST bdev_nbd 00:15:13.651 ************************************ 00:15:13.652 15:23:26 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:13.652 15:23:26 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:15:13.652 15:23:26 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:15:13.652 15:23:26 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:15:13.652 15:23:26 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:15:13.652 15:23:26 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.652 15:23:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.652 15:23:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.652 ************************************ 00:15:13.652 START TEST bdev_fio 00:15:13.652 ************************************ 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:13.652 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 
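For reference, the fio_config_gen steps above assemble test/bdev/bdev.fio: touch the file, cat a verify workload template into it, probe `fio --version`, and, because fio-3.35 matches `*fio-3*` with an AIO-style bdev, append `serialize_overlap=1` (the echo just below); one `[job_*]`/`filename=` section per bdev is then echoed in. A minimal sketch of the resulting file follows — the template body itself is never printed in this log, so `thread=1` and `verify=crc32c` are assumptions, not transcription:

# Illustrative sketch only; options marked "assumed" are not visible in this log.
cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
[global]
thread=1                # assumed
rw=randwrite            # matches the per-job banners printed when fio starts
verify=crc32c           # assumed verify pattern
serialize_overlap=1     # appended because fio-3.35 + AIO bdev type (echo below)

[job_nvme0n1]
filename=nvme0n1
# ...one [job_*]/filename= pair per bdev, through nvme3n1 (echoed below)...
EOF

Engine, queue depth, block size, and runtime are deliberately kept out of the file and passed on the fio command line instead (`--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10`, per the fio_params local further down).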
00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:13.652 ************************************ 00:15:13.652 START TEST bdev_fio_rw_verify 00:15:13.652 ************************************ 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- 
# fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:13.652 15:23:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.652 15:23:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:13.652 15:23:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:13.652 15:23:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:13.652 15:23:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:15:13.652 15:23:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:13.652 15:23:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:13.652 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.652 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.652 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.652 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.652 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.652 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:13.652 fio-3.35 00:15:13.652 Starting 6 threads 00:15:25.858 00:15:25.859 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76434: Thu Jul 11 15:23:37 2024 00:15:25.859 read: 
IOPS=28.7k, BW=112MiB/s (118MB/s)(1123MiB/10001msec) 00:15:25.859 slat (usec): min=3, max=1846, avg= 7.07, stdev= 4.95 00:15:25.859 clat (usec): min=89, max=4181, avg=662.44, stdev=207.83 00:15:25.859 lat (usec): min=97, max=4190, avg=669.51, stdev=208.50 00:15:25.859 clat percentiles (usec): 00:15:25.859 | 50.000th=[ 693], 99.000th=[ 1156], 99.900th=[ 1663], 99.990th=[ 2704], 00:15:25.859 | 99.999th=[ 4178] 00:15:25.859 write: IOPS=29.0k, BW=113MiB/s (119MB/s)(1134MiB/10001msec); 0 zone resets 00:15:25.859 slat (usec): min=14, max=2858, avg=24.62, stdev=22.37 00:15:25.859 clat (usec): min=89, max=3878, avg=736.54, stdev=211.73 00:15:25.859 lat (usec): min=104, max=3919, avg=761.17, stdev=212.98 00:15:25.859 clat percentiles (usec): 00:15:25.859 | 50.000th=[ 750], 99.000th=[ 1287], 99.900th=[ 1778], 99.990th=[ 2409], 00:15:25.859 | 99.999th=[ 3818] 00:15:25.859 bw ( KiB/s): min=97597, max=143448, per=99.77%, avg=115826.53, stdev=2278.65, samples=114 00:15:25.859 iops : min=24397, max=35862, avg=28956.37, stdev=569.68, samples=114 00:15:25.859 lat (usec) : 100=0.01%, 250=2.37%, 500=14.74%, 750=40.70%, 1000=36.22% 00:15:25.859 lat (msec) : 2=5.93%, 4=0.03%, 10=0.01% 00:15:25.859 cpu : usr=62.83%, sys=25.30%, ctx=6849, majf=0, minf=24488 00:15:25.859 IO depths : 1=12.1%, 2=24.7%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:25.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.859 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.859 issued rwts: total=287407,290261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.859 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:25.859 00:15:25.859 Run status group 0 (all jobs): 00:15:25.859 READ: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=1123MiB (1177MB), run=10001-10001msec 00:15:25.859 WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=1134MiB (1189MB), run=10001-10001msec 00:15:25.859 ----------------------------------------------------- 00:15:25.859 Suppressions used: 00:15:25.859 count bytes template 00:15:25.859 6 48 /usr/src/fio/parse.c 00:15:25.859 2650 254400 /usr/src/fio/iolog.c 00:15:25.859 1 8 libtcmalloc_minimal.so 00:15:25.859 1 904 libcrypto.so 00:15:25.859 ----------------------------------------------------- 00:15:25.859 00:15:25.859 00:15:25.859 real 0m12.193s 00:15:25.859 user 0m39.462s 00:15:25.859 sys 0m15.491s 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:25.859 ************************************ 00:15:25.859 END TEST bdev_fio_rw_verify 00:15:25.859 ************************************ 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7b718a34-9c4b-4fb8-a33d-fa332c545e5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7b718a34-9c4b-4fb8-a33d-fa332c545e5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "47ddb309-920a-4ca1-8820-097b5fe12b18"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "47ddb309-920a-4ca1-8820-097b5fe12b18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "a8f6b6ee-9aae-4f0d-915b-8c93f427d3f4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8f6b6ee-9aae-4f0d-915b-8c93f427d3f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' 
},' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "b493f083-21c6-40a6-9187-e4aecc8dbb99"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b493f083-21c6-40a6-9187-e4aecc8dbb99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ec314797-b0a6-4075-b34d-5288895e0d65"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec314797-b0a6-4075-b34d-5288895e0d65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "622148c3-dd94-4c48-9829-c65558d1585d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "622148c3-dd94-4c48-9829-c65558d1585d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:25.859 /home/vagrant/spdk_repo/spdk 00:15:25.859 15:23:39 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:15:25.859 00:15:25.859 real 0m12.380s 00:15:25.859 user 0m39.565s 00:15:25.859 sys 0m15.572s 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:25.859 15:23:39 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:25.859 ************************************ 00:15:25.859 END TEST bdev_fio 00:15:25.859 ************************************ 00:15:25.859 15:23:39 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:25.859 15:23:39 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:25.859 15:23:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:25.859 15:23:39 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:25.859 15:23:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:25.859 15:23:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.859 ************************************ 00:15:25.859 START TEST bdev_verify 00:15:25.859 ************************************ 00:15:25.859 15:23:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:25.859 [2024-07-11 15:23:39.459733] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:25.859 [2024-07-11 15:23:39.459948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76610 ] 00:15:26.117 [2024-07-11 15:23:39.635273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:26.375 [2024-07-11 15:23:39.848052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.375 [2024-07-11 15:23:39.848085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.941 Running I/O for 5 seconds... 
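The bdev_verify run just launched is plain bdevperf against the same bdev.json; its flags, transcribed from the command line above:

# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4 KiB I/O size
# -w verify : write-then-read-back data verification workload
# -t 5      : run for 5 seconds
# -m 0x3    : core mask, reactors on cores 0 and 1 (the two "Reactor started" lines)
# -C        : every core submits to every bdev, which is why each bdev reports
#             paired "Core Mask 0x1" / "Core Mask 0x2" rows in the table below
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3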
00:15:32.207 00:15:32.207 Latency(us) 00:15:32.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.207 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x0 length 0xa0000 00:15:32.207 nvme0n1 : 5.07 1765.71 6.90 0.00 0.00 72366.41 10128.29 74353.57 00:15:32.207 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0xa0000 length 0xa0000 00:15:32.207 nvme0n1 : 5.03 1781.07 6.96 0.00 0.00 71742.10 11617.75 84839.33 00:15:32.207 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x0 length 0xbd0bd 00:15:32.207 nvme1n1 : 5.06 2773.59 10.83 0.00 0.00 45914.63 4796.04 96754.97 00:15:32.207 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:32.207 nvme1n1 : 5.05 2559.04 10.00 0.00 0.00 49806.88 5302.46 104380.97 00:15:32.207 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x0 length 0x80000 00:15:32.207 nvme2n1 : 5.08 1739.58 6.80 0.00 0.00 73082.80 8579.26 74830.20 00:15:32.207 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x80000 length 0x80000 00:15:32.207 nvme2n1 : 5.05 1797.83 7.02 0.00 0.00 70797.61 7238.75 74830.20 00:15:32.207 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x0 length 0x80000 00:15:32.207 nvme2n2 : 5.07 1742.43 6.81 0.00 0.00 72820.65 11796.48 66727.56 00:15:32.207 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x80000 length 0x80000 00:15:32.207 nvme2n2 : 5.05 1775.33 6.93 0.00 0.00 71588.19 7923.90 70063.94 00:15:32.207 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x0 length 0x80000 00:15:32.207 nvme2n3 : 5.07 1767.03 6.90 0.00 0.00 71660.64 11796.48 67204.19 00:15:32.207 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x80000 length 0x80000 00:15:32.207 nvme2n3 : 5.06 1797.06 7.02 0.00 0.00 70587.37 6017.40 70063.94 00:15:32.207 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x0 length 0x20000 00:15:32.207 nvme3n1 : 5.07 1766.37 6.90 0.00 0.00 71580.07 12273.11 77689.95 00:15:32.207 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.207 Verification LBA range: start 0x20000 length 0x20000 00:15:32.207 nvme3n1 : 5.06 1795.94 7.02 0.00 0.00 70486.93 7745.16 79119.83 00:15:32.207 =================================================================================================================== 00:15:32.207 Total : 23060.98 90.08 0.00 0.00 66143.02 4796.04 104380.97 00:15:33.144 00:15:33.144 real 0m7.144s 00:15:33.144 user 0m11.141s 00:15:33.144 sys 0m1.698s 00:15:33.144 15:23:46 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.144 15:23:46 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:33.144 ************************************ 00:15:33.144 END TEST bdev_verify 00:15:33.144 ************************************ 00:15:33.144 15:23:46 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:15:33.144 15:23:46 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:33.144 15:23:46 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:15:33.144 15:23:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.144 15:23:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:33.144 ************************************ 00:15:33.144 START TEST bdev_verify_big_io 00:15:33.144 ************************************ 00:15:33.144 15:23:46 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:33.144 [2024-07-11 15:23:46.636248] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:33.144 [2024-07-11 15:23:46.636435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76714 ] 00:15:33.402 [2024-07-11 15:23:46.792245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:33.402 [2024-07-11 15:23:46.970097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.402 [2024-07-11 15:23:46.970110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.968 Running I/O for 5 seconds... 00:15:40.529 00:15:40.529 Latency(us) 00:15:40.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.529 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x0 length 0xa000 00:15:40.529 nvme0n1 : 6.07 105.44 6.59 0.00 0.00 1142553.13 174444.92 1090519.04 00:15:40.529 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0xa000 length 0xa000 00:15:40.529 nvme0n1 : 6.04 117.89 7.37 0.00 0.00 1054726.61 133455.13 1517575.45 00:15:40.529 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x0 length 0xbd0b 00:15:40.529 nvme1n1 : 6.07 129.12 8.07 0.00 0.00 930134.42 14358.34 1029510.98 00:15:40.529 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:40.529 nvme1n1 : 6.04 148.30 9.27 0.00 0.00 811168.91 58148.31 926559.88 00:15:40.529 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x0 length 0x8000 00:15:40.529 nvme2n1 : 6.05 120.36 7.52 0.00 0.00 963850.59 144894.14 1090519.04 00:15:40.529 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x8000 length 0x8000 00:15:40.529 nvme2n1 : 6.03 112.79 7.05 0.00 0.00 1033550.38 111530.36 926559.88 00:15:40.529 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x0 length 0x8000 00:15:40.529 nvme2n2 : 6.05 137.46 8.59 0.00 0.00 817949.43 141081.13 808356.77 00:15:40.529 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:15:40.529 Verification LBA range: start 0x8000 length 0x8000 00:15:40.529 nvme2n2 : 6.03 159.15 9.95 0.00 0.00 709746.44 78643.20 899868.86 00:15:40.529 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x0 length 0x8000 00:15:40.529 nvme2n3 : 6.07 92.29 5.77 0.00 0.00 1195216.42 9413.35 2897882.76 00:15:40.529 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x8000 length 0x8000 00:15:40.529 nvme2n3 : 6.04 87.35 5.46 0.00 0.00 1251203.88 30742.34 1891249.80 00:15:40.529 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x0 length 0x2000 00:15:40.529 nvme3n1 : 6.08 155.37 9.71 0.00 0.00 685030.69 11558.17 892242.85 00:15:40.529 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:40.529 Verification LBA range: start 0x2000 length 0x2000 00:15:40.529 nvme3n1 : 6.05 93.08 5.82 0.00 0.00 1146503.38 10307.03 2867378.73 00:15:40.529 =================================================================================================================== 00:15:40.529 Total : 1458.59 91.16 0.00 0.00 943324.72 9413.35 2897882.76 00:15:41.489 00:15:41.489 real 0m8.347s 00:15:41.489 user 0m15.095s 00:15:41.489 sys 0m0.501s 00:15:41.489 15:23:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:41.489 15:23:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 ************************************ 00:15:41.489 END TEST bdev_verify_big_io 00:15:41.489 ************************************ 00:15:41.489 15:23:54 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:41.489 15:23:54 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.489 15:23:54 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:41.489 15:23:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:41.489 15:23:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.489 ************************************ 00:15:41.489 START TEST bdev_write_zeroes 00:15:41.489 ************************************ 00:15:41.490 15:23:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:41.490 [2024-07-11 15:23:55.033145] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:41.490 [2024-07-11 15:23:55.033308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76825 ] 00:15:41.763 [2024-07-11 15:23:55.188608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.763 [2024-07-11 15:23:55.360883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.333 Running I/O for 1 seconds... 
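For comparison, the three bdevperf passes in this suite reuse one harness and one JSON config, varying only I/O size, workload, and core count (flags transcribed from the run_test lines above; write_zeroes runs single-core under -c 0x1 and for just one second):

BPERF=build/examples/bdevperf CONF=test/bdev/bdev.json
$BPERF --json $CONF -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
$BPERF --json $CONF -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
$BPERF --json $CONF -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes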
00:15:43.267 00:15:43.267 Latency(us) 00:15:43.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.267 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:43.268 nvme0n1 : 1.02 12172.87 47.55 0.00 0.00 10503.73 6851.49 17754.30 00:15:43.268 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:43.268 nvme1n1 : 1.02 17140.03 66.95 0.00 0.00 7452.40 4110.89 16205.27 00:15:43.268 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:43.268 nvme2n1 : 1.02 12154.65 47.48 0.00 0.00 10460.58 6642.97 17992.61 00:15:43.268 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:43.268 nvme2n2 : 1.02 12136.71 47.41 0.00 0.00 10468.14 6494.02 18707.55 00:15:43.268 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:43.268 nvme2n3 : 1.02 12118.42 47.34 0.00 0.00 10475.20 6613.18 18707.55 00:15:43.268 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:43.268 nvme3n1 : 1.03 12100.34 47.27 0.00 0.00 10482.69 6702.55 18707.55 00:15:43.268 =================================================================================================================== 00:15:43.268 Total : 77823.02 304.00 0.00 0.00 9814.49 4110.89 18707.55 00:15:44.652 00:15:44.652 real 0m2.914s 00:15:44.652 user 0m2.204s 00:15:44.652 sys 0m0.533s 00:15:44.652 ************************************ 00:15:44.652 END TEST bdev_write_zeroes 00:15:44.652 ************************************ 00:15:44.652 15:23:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.652 15:23:57 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:44.652 15:23:57 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:15:44.652 15:23:57 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:44.652 15:23:57 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:44.652 15:23:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.652 15:23:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.652 ************************************ 00:15:44.652 START TEST bdev_json_nonenclosed 00:15:44.652 ************************************ 00:15:44.652 15:23:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:44.652 [2024-07-11 15:23:58.017743] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
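bdev_json_nonenclosed, starting above, is a negative test: bdevperf is pointed at test/bdev/nonenclosed.json, whose top level is deliberately not a JSON object. The file's literal contents are not printed in this log, so the reproduction below is only a plausible shape:

# Hypothetical input; the real nonenclosed.json may differ.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF
build/examples/bdevperf --json nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
# Expected: config load fails with the "not enclosed in {}" error shown just
# below, the app exits non-zero, and the wrapper records es=234 as a pass.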
00:15:44.652 [2024-07-11 15:23:58.017950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76879 ] 00:15:44.652 [2024-07-11 15:23:58.189267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.912 [2024-07-11 15:23:58.356397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.912 [2024-07-11 15:23:58.356514] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:44.912 [2024-07-11 15:23:58.356538] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:44.912 [2024-07-11 15:23:58.356554] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:45.170 00:15:45.171 real 0m0.811s 00:15:45.171 user 0m0.576s 00:15:45.171 sys 0m0.130s 00:15:45.171 15:23:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:15:45.171 ************************************ 00:15:45.171 END TEST bdev_json_nonenclosed 00:15:45.171 ************************************ 00:15:45.171 15:23:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.171 15:23:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:45.171 15:23:58 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:15:45.171 15:23:58 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:15:45.171 15:23:58 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:45.171 15:23:58 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:15:45.171 15:23:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.171 15:23:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.171 ************************************ 00:15:45.171 START TEST bdev_json_nonarray 00:15:45.171 ************************************ 00:15:45.171 15:23:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:45.430 [2024-07-11 15:23:58.865971] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:45.430 [2024-07-11 15:23:58.866137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76907 ] 00:15:45.430 [2024-07-11 15:23:59.026873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.688 [2024-07-11 15:23:59.198072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.688 [2024-07-11 15:23:59.198247] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
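bdev_json_nonarray is the companion negative case: "subsystems" is present but is not an array, which trips the error just printed. Again the real file is not shown in this log, so this is only an illustrative shape:

# Hypothetical input; the real nonarray.json may differ.
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# json_config rejects it ("'subsystems' should be an array"), spdk_app_stop
# returns non-zero, and the wrapper again records es=234 / return 234.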
00:15:45.688 [2024-07-11 15:23:59.198286] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:45.688 [2024-07-11 15:23:59.198301] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:46.255 00:15:46.255 real 0m0.792s 00:15:46.255 user 0m0.567s 00:15:46.255 sys 0m0.120s 00:15:46.255 15:23:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:15:46.255 ************************************ 00:15:46.255 END TEST bdev_json_nonarray 00:15:46.255 ************************************ 00:15:46.255 15:23:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.255 15:23:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:46.255 15:23:59 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:46.255 15:23:59 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:46.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:47.889 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:47.889 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:47.889 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:47.889 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:48.148 00:15:48.148 real 1m0.622s 00:15:48.148 user 1m44.396s 00:15:48.148 sys 0m26.426s 00:15:48.148 15:24:01 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.148 ************************************ 00:15:48.148 END TEST blockdev_xnvme 00:15:48.148 ************************************ 00:15:48.148 15:24:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.148 15:24:01 -- common/autotest_common.sh@1142 -- # return 0 00:15:48.148 15:24:01 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:48.148 15:24:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:48.148 15:24:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.148 15:24:01 -- common/autotest_common.sh@10 -- # set +x 00:15:48.148 ************************************ 00:15:48.148 START TEST ublk 00:15:48.148 ************************************ 00:15:48.148 15:24:01 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:48.148 * Looking for test storage... 
00:15:48.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:48.148 15:24:01 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:48.148 15:24:01 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:48.148 15:24:01 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:48.148 15:24:01 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:48.148 15:24:01 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:48.148 15:24:01 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:48.148 15:24:01 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:48.148 15:24:01 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:48.148 15:24:01 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:48.148 15:24:01 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:48.148 15:24:01 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.148 15:24:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.148 ************************************ 00:15:48.148 START TEST test_save_ublk_config 00:15:48.148 ************************************ 00:15:48.148 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:15:48.148 15:24:01 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:48.148 15:24:01 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77188 00:15:48.148 15:24:01 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:48.148 15:24:01 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:48.148 15:24:01 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77188 00:15:48.407 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77188 ']' 00:15:48.407 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.407 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.407 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.407 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.407 15:24:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:48.407 [2024-07-11 15:24:01.893404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
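test_save_ublk_config boots spdk_tgt with -L ublk, creates a ublk target plus one ublk disk backed by malloc0, then calls save_config; a second spdk_tgt is later fed the dump via `-c /dev/fd/63`. One way to pull just the ublk fragment out of such a dump — the method names and params below are copied verbatim from the config printed further down, while the jq pipeline itself is an illustrative addition:

# Assumes the target is listening on the default /var/tmp/spdk.sock.
scripts/rpc.py -s /var/tmp/spdk.sock save_config \
  | jq '.subsystems[] | select(.subsystem == "ublk")'
# yields, per the dump below:
#   { "subsystem": "ublk",
#     "config": [
#       { "method": "ublk_create_target", "params": { "cpumask": "1" } },
#       { "method": "ublk_start_disk",
#         "params": { "bdev_name": "malloc0", "ublk_id": 0,
#                     "num_queues": 1, "queue_depth": 128 } } ] }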
00:15:48.407 [2024-07-11 15:24:01.893585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77188 ] 00:15:48.665 [2024-07-11 15:24:02.065690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.924 [2024-07-11 15:24:02.301094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.490 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.490 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:49.490 15:24:03 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:49.490 15:24:03 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:49.490 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.490 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.490 [2024-07-11 15:24:03.065116] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:49.490 [2024-07-11 15:24:03.066343] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:49.748 malloc0 00:15:49.748 [2024-07-11 15:24:03.139248] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:49.748 [2024-07-11 15:24:03.139368] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:49.748 [2024-07-11 15:24:03.139386] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:49.748 [2024-07-11 15:24:03.139397] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:49.748 [2024-07-11 15:24:03.147138] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:49.748 [2024-07-11 15:24:03.147173] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:49.748 [2024-07-11 15:24:03.155166] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:49.748 [2024-07-11 15:24:03.155299] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:49.748 [2024-07-11 15:24:03.179058] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:49.748 0 00:15:49.748 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.748 15:24:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:49.748 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.748 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:50.006 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.007 15:24:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:50.007 "subsystems": [ 00:15:50.007 { 00:15:50.007 "subsystem": "keyring", 00:15:50.007 "config": [] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "iobuf", 00:15:50.007 "config": [ 00:15:50.007 { 00:15:50.007 "method": "iobuf_set_options", 00:15:50.007 "params": { 00:15:50.007 "small_pool_count": 8192, 00:15:50.007 "large_pool_count": 1024, 00:15:50.007 "small_bufsize": 8192, 00:15:50.007 "large_bufsize": 135168 00:15:50.007 } 00:15:50.007 } 00:15:50.007 ] 00:15:50.007 }, 00:15:50.007 { 
00:15:50.007 "subsystem": "sock", 00:15:50.007 "config": [ 00:15:50.007 { 00:15:50.007 "method": "sock_set_default_impl", 00:15:50.007 "params": { 00:15:50.007 "impl_name": "posix" 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "sock_impl_set_options", 00:15:50.007 "params": { 00:15:50.007 "impl_name": "ssl", 00:15:50.007 "recv_buf_size": 4096, 00:15:50.007 "send_buf_size": 4096, 00:15:50.007 "enable_recv_pipe": true, 00:15:50.007 "enable_quickack": false, 00:15:50.007 "enable_placement_id": 0, 00:15:50.007 "enable_zerocopy_send_server": true, 00:15:50.007 "enable_zerocopy_send_client": false, 00:15:50.007 "zerocopy_threshold": 0, 00:15:50.007 "tls_version": 0, 00:15:50.007 "enable_ktls": false 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "sock_impl_set_options", 00:15:50.007 "params": { 00:15:50.007 "impl_name": "posix", 00:15:50.007 "recv_buf_size": 2097152, 00:15:50.007 "send_buf_size": 2097152, 00:15:50.007 "enable_recv_pipe": true, 00:15:50.007 "enable_quickack": false, 00:15:50.007 "enable_placement_id": 0, 00:15:50.007 "enable_zerocopy_send_server": true, 00:15:50.007 "enable_zerocopy_send_client": false, 00:15:50.007 "zerocopy_threshold": 0, 00:15:50.007 "tls_version": 0, 00:15:50.007 "enable_ktls": false 00:15:50.007 } 00:15:50.007 } 00:15:50.007 ] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "vmd", 00:15:50.007 "config": [] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "accel", 00:15:50.007 "config": [ 00:15:50.007 { 00:15:50.007 "method": "accel_set_options", 00:15:50.007 "params": { 00:15:50.007 "small_cache_size": 128, 00:15:50.007 "large_cache_size": 16, 00:15:50.007 "task_count": 2048, 00:15:50.007 "sequence_count": 2048, 00:15:50.007 "buf_count": 2048 00:15:50.007 } 00:15:50.007 } 00:15:50.007 ] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "bdev", 00:15:50.007 "config": [ 00:15:50.007 { 00:15:50.007 "method": "bdev_set_options", 00:15:50.007 "params": { 00:15:50.007 "bdev_io_pool_size": 65535, 00:15:50.007 "bdev_io_cache_size": 256, 00:15:50.007 "bdev_auto_examine": true, 00:15:50.007 "iobuf_small_cache_size": 128, 00:15:50.007 "iobuf_large_cache_size": 16 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "bdev_raid_set_options", 00:15:50.007 "params": { 00:15:50.007 "process_window_size_kb": 1024 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "bdev_iscsi_set_options", 00:15:50.007 "params": { 00:15:50.007 "timeout_sec": 30 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "bdev_nvme_set_options", 00:15:50.007 "params": { 00:15:50.007 "action_on_timeout": "none", 00:15:50.007 "timeout_us": 0, 00:15:50.007 "timeout_admin_us": 0, 00:15:50.007 "keep_alive_timeout_ms": 10000, 00:15:50.007 "arbitration_burst": 0, 00:15:50.007 "low_priority_weight": 0, 00:15:50.007 "medium_priority_weight": 0, 00:15:50.007 "high_priority_weight": 0, 00:15:50.007 "nvme_adminq_poll_period_us": 10000, 00:15:50.007 "nvme_ioq_poll_period_us": 0, 00:15:50.007 "io_queue_requests": 0, 00:15:50.007 "delay_cmd_submit": true, 00:15:50.007 "transport_retry_count": 4, 00:15:50.007 "bdev_retry_count": 3, 00:15:50.007 "transport_ack_timeout": 0, 00:15:50.007 "ctrlr_loss_timeout_sec": 0, 00:15:50.007 "reconnect_delay_sec": 0, 00:15:50.007 "fast_io_fail_timeout_sec": 0, 00:15:50.007 "disable_auto_failback": false, 00:15:50.007 "generate_uuids": false, 00:15:50.007 "transport_tos": 0, 00:15:50.007 "nvme_error_stat": false, 00:15:50.007 "rdma_srq_size": 0, 00:15:50.007 
"io_path_stat": false, 00:15:50.007 "allow_accel_sequence": false, 00:15:50.007 "rdma_max_cq_size": 0, 00:15:50.007 "rdma_cm_event_timeout_ms": 0, 00:15:50.007 "dhchap_digests": [ 00:15:50.007 "sha256", 00:15:50.007 "sha384", 00:15:50.007 "sha512" 00:15:50.007 ], 00:15:50.007 "dhchap_dhgroups": [ 00:15:50.007 "null", 00:15:50.007 "ffdhe2048", 00:15:50.007 "ffdhe3072", 00:15:50.007 "ffdhe4096", 00:15:50.007 "ffdhe6144", 00:15:50.007 "ffdhe8192" 00:15:50.007 ] 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "bdev_nvme_set_hotplug", 00:15:50.007 "params": { 00:15:50.007 "period_us": 100000, 00:15:50.007 "enable": false 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "bdev_malloc_create", 00:15:50.007 "params": { 00:15:50.007 "name": "malloc0", 00:15:50.007 "num_blocks": 8192, 00:15:50.007 "block_size": 4096, 00:15:50.007 "physical_block_size": 4096, 00:15:50.007 "uuid": "d8a45f9f-72d7-4f41-b87e-6e309f56d051", 00:15:50.007 "optimal_io_boundary": 0 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "bdev_wait_for_examine" 00:15:50.007 } 00:15:50.007 ] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "scsi", 00:15:50.007 "config": null 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "scheduler", 00:15:50.007 "config": [ 00:15:50.007 { 00:15:50.007 "method": "framework_set_scheduler", 00:15:50.007 "params": { 00:15:50.007 "name": "static" 00:15:50.007 } 00:15:50.007 } 00:15:50.007 ] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "vhost_scsi", 00:15:50.007 "config": [] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "vhost_blk", 00:15:50.007 "config": [] 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "subsystem": "ublk", 00:15:50.007 "config": [ 00:15:50.007 { 00:15:50.007 "method": "ublk_create_target", 00:15:50.007 "params": { 00:15:50.007 "cpumask": "1" 00:15:50.007 } 00:15:50.007 }, 00:15:50.007 { 00:15:50.007 "method": "ublk_start_disk", 00:15:50.007 "params": { 00:15:50.007 "bdev_name": "malloc0", 00:15:50.007 "ublk_id": 0, 00:15:50.008 "num_queues": 1, 00:15:50.008 "queue_depth": 128 00:15:50.008 } 00:15:50.008 } 00:15:50.008 ] 00:15:50.008 }, 00:15:50.008 { 00:15:50.008 "subsystem": "nbd", 00:15:50.008 "config": [] 00:15:50.008 }, 00:15:50.008 { 00:15:50.008 "subsystem": "nvmf", 00:15:50.008 "config": [ 00:15:50.008 { 00:15:50.008 "method": "nvmf_set_config", 00:15:50.008 "params": { 00:15:50.008 "discovery_filter": "match_any", 00:15:50.008 "admin_cmd_passthru": { 00:15:50.008 "identify_ctrlr": false 00:15:50.008 } 00:15:50.008 } 00:15:50.008 }, 00:15:50.008 { 00:15:50.008 "method": "nvmf_set_max_subsystems", 00:15:50.008 "params": { 00:15:50.008 "max_subsystems": 1024 00:15:50.008 } 00:15:50.008 }, 00:15:50.008 { 00:15:50.008 "method": "nvmf_set_crdt", 00:15:50.008 "params": { 00:15:50.008 "crdt1": 0, 00:15:50.008 "crdt2": 0, 00:15:50.008 "crdt3": 0 00:15:50.008 } 00:15:50.008 } 00:15:50.008 ] 00:15:50.008 }, 00:15:50.008 { 00:15:50.008 "subsystem": "iscsi", 00:15:50.008 "config": [ 00:15:50.008 { 00:15:50.008 "method": "iscsi_set_options", 00:15:50.008 "params": { 00:15:50.008 "node_base": "iqn.2016-06.io.spdk", 00:15:50.008 "max_sessions": 128, 00:15:50.008 "max_connections_per_session": 2, 00:15:50.008 "max_queue_depth": 64, 00:15:50.008 "default_time2wait": 2, 00:15:50.008 "default_time2retain": 20, 00:15:50.008 "first_burst_length": 8192, 00:15:50.008 "immediate_data": true, 00:15:50.008 "allow_duplicated_isid": false, 00:15:50.008 "error_recovery_level": 0, 00:15:50.008 "nop_timeout": 60, 
00:15:50.008 "nop_in_interval": 30, 00:15:50.008 "disable_chap": false, 00:15:50.008 "require_chap": false, 00:15:50.008 "mutual_chap": false, 00:15:50.008 "chap_group": 0, 00:15:50.008 "max_large_datain_per_connection": 64, 00:15:50.008 "max_r2t_per_connection": 4, 00:15:50.008 "pdu_pool_size": 36864, 00:15:50.008 "immediate_data_pool_size": 16384, 00:15:50.008 "data_out_pool_size": 2048 00:15:50.008 } 00:15:50.008 } 00:15:50.008 ] 00:15:50.008 } 00:15:50.008 ] 00:15:50.008 }' 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77188 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77188 ']' 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77188 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77188 00:15:50.008 killing process with pid 77188 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77188' 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77188 00:15:50.008 15:24:03 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77188 00:15:51.383 [2024-07-11 15:24:04.661957] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.383 [2024-07-11 15:24:04.694140] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.383 [2024-07-11 15:24:04.694387] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.383 [2024-07-11 15:24:04.703082] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.383 [2024-07-11 15:24:04.703153] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:51.383 [2024-07-11 15:24:04.703167] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:51.383 [2024-07-11 15:24:04.703204] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:51.383 [2024-07-11 15:24:04.703390] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77243 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77243 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 77243 ']' 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:52.319 15:24:05 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:52.319 15:24:05 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:52.319 "subsystems": [ 00:15:52.319 { 00:15:52.320 "subsystem": "keyring", 00:15:52.320 "config": [] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "iobuf", 00:15:52.320 "config": [ 00:15:52.320 { 00:15:52.320 "method": "iobuf_set_options", 00:15:52.320 "params": { 00:15:52.320 "small_pool_count": 8192, 00:15:52.320 "large_pool_count": 1024, 00:15:52.320 "small_bufsize": 8192, 00:15:52.320 "large_bufsize": 135168 00:15:52.320 } 00:15:52.320 } 00:15:52.320 ] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "sock", 00:15:52.320 "config": [ 00:15:52.320 { 00:15:52.320 "method": "sock_set_default_impl", 00:15:52.320 "params": { 00:15:52.320 "impl_name": "posix" 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "sock_impl_set_options", 00:15:52.320 "params": { 00:15:52.320 "impl_name": "ssl", 00:15:52.320 "recv_buf_size": 4096, 00:15:52.320 "send_buf_size": 4096, 00:15:52.320 "enable_recv_pipe": true, 00:15:52.320 "enable_quickack": false, 00:15:52.320 "enable_placement_id": 0, 00:15:52.320 "enable_zerocopy_send_server": true, 00:15:52.320 "enable_zerocopy_send_client": false, 00:15:52.320 "zerocopy_threshold": 0, 00:15:52.320 "tls_version": 0, 00:15:52.320 "enable_ktls": false 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "sock_impl_set_options", 00:15:52.320 "params": { 00:15:52.320 "impl_name": "posix", 00:15:52.320 "recv_buf_size": 2097152, 00:15:52.320 "send_buf_size": 2097152, 00:15:52.320 "enable_recv_pipe": true, 00:15:52.320 "enable_quickack": false, 00:15:52.320 "enable_placement_id": 0, 00:15:52.320 "enable_zerocopy_send_server": true, 00:15:52.320 "enable_zerocopy_send_client": false, 00:15:52.320 "zerocopy_threshold": 0, 00:15:52.320 "tls_version": 0, 00:15:52.320 "enable_ktls": false 00:15:52.320 } 00:15:52.320 } 00:15:52.320 ] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "vmd", 00:15:52.320 "config": [] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "accel", 00:15:52.320 "config": [ 00:15:52.320 { 00:15:52.320 "method": "accel_set_options", 00:15:52.320 "params": { 00:15:52.320 "small_cache_size": 128, 00:15:52.320 "large_cache_size": 16, 00:15:52.320 "task_count": 2048, 00:15:52.320 "sequence_count": 2048, 00:15:52.320 "buf_count": 2048 00:15:52.320 } 00:15:52.320 } 00:15:52.320 ] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "bdev", 00:15:52.320 "config": [ 00:15:52.320 { 00:15:52.320 "method": "bdev_set_options", 00:15:52.320 "params": { 00:15:52.320 "bdev_io_pool_size": 65535, 00:15:52.320 "bdev_io_cache_size": 256, 00:15:52.320 "bdev_auto_examine": true, 00:15:52.320 "iobuf_small_cache_size": 128, 00:15:52.320 "iobuf_large_cache_size": 16 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "bdev_raid_set_options", 00:15:52.320 "params": { 00:15:52.320 "process_window_size_kb": 1024 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "bdev_iscsi_set_options", 00:15:52.320 "params": { 00:15:52.320 "timeout_sec": 30 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "bdev_nvme_set_options", 00:15:52.320 "params": { 00:15:52.320 "action_on_timeout": "none", 00:15:52.320 "timeout_us": 0, 00:15:52.320 "timeout_admin_us": 0, 00:15:52.320 "keep_alive_timeout_ms": 10000, 00:15:52.320 "arbitration_burst": 0, 00:15:52.320 "low_priority_weight": 0, 00:15:52.320 "medium_priority_weight": 0, 00:15:52.320 "high_priority_weight": 0, 00:15:52.320 
"nvme_adminq_poll_period_us": 10000, 00:15:52.320 "nvme_ioq_poll_period_us": 0, 00:15:52.320 "io_queue_requests": 0, 00:15:52.320 "delay_cmd_submit": true, 00:15:52.320 "transport_retry_count": 4, 00:15:52.320 "bdev_retry_count": 3, 00:15:52.320 "transport_ack_timeout": 0, 00:15:52.320 "ctrlr_loss_timeout_sec": 0, 00:15:52.320 "reconnect_delay_sec": 0, 00:15:52.320 "fast_io_fail_timeout_sec": 0, 00:15:52.320 "disable_auto_failback": false, 00:15:52.320 "generate_uuids": false, 00:15:52.320 "transport_tos": 0, 00:15:52.320 "nvme_error_stat": false, 00:15:52.320 "rdma_srq_size": 0, 00:15:52.320 "io_path_stat": false, 00:15:52.320 "allow_accel_sequence": false, 00:15:52.320 "rdma_max_cq_size": 0, 00:15:52.320 "rdma_cm_event_timeout_ms": 0, 00:15:52.320 "dhchap_digests": [ 00:15:52.320 "sha256", 00:15:52.320 "sha384", 00:15:52.320 "sha512" 00:15:52.320 ], 00:15:52.320 "dhchap_dhgroups": [ 00:15:52.320 "null", 00:15:52.320 "ffdhe2048", 00:15:52.320 "ffdhe3072", 00:15:52.320 "ffdhe4096", 00:15:52.320 "ffdhe6144", 00:15:52.320 "ffdhe8192" 00:15:52.320 ] 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "bdev_nvme_set_hotplug", 00:15:52.320 "params": { 00:15:52.320 "period_us": 100000, 00:15:52.320 "enable": false 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "bdev_malloc_create", 00:15:52.320 "params": { 00:15:52.320 "name": "malloc0", 00:15:52.320 "num_blocks": 8192, 00:15:52.320 "block_size": 4096, 00:15:52.320 "physical_block_size": 4096, 00:15:52.320 "uuid": "d8a45f9f-72d7-4f41-b87e-6e309f56d051", 00:15:52.320 "optimal_io_boundary": 0 00:15:52.320 } 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "method": "bdev_wait_for_examine" 00:15:52.320 } 00:15:52.320 ] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "scsi", 00:15:52.320 "config": null 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "scheduler", 00:15:52.320 "config": [ 00:15:52.320 { 00:15:52.320 "method": "framework_set_scheduler", 00:15:52.320 "params": { 00:15:52.320 "name": "static" 00:15:52.320 } 00:15:52.320 } 00:15:52.320 ] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "vhost_scsi", 00:15:52.320 "config": [] 00:15:52.320 }, 00:15:52.320 { 00:15:52.320 "subsystem": "vhost_blk", 00:15:52.320 "config": [] 00:15:52.320 }, 00:15:52.321 { 00:15:52.321 "subsystem": "ublk", 00:15:52.321 "config": [ 00:15:52.321 { 00:15:52.321 "method": "ublk_create_target", 00:15:52.321 "params": { 00:15:52.321 "cpumask": "1" 00:15:52.321 } 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "method": "ublk_start_disk", 00:15:52.321 "params": { 00:15:52.321 "bdev_name": "malloc0", 00:15:52.321 "ublk_id": 0, 00:15:52.321 "num_queues": 1, 00:15:52.321 "queue_depth": 128 00:15:52.321 } 00:15:52.321 } 00:15:52.321 ] 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "subsystem": "nbd", 00:15:52.321 "config": [] 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "subsystem": "nvmf", 00:15:52.321 "config": [ 00:15:52.321 { 00:15:52.321 "method": "nvmf_set_config", 00:15:52.321 "params": { 00:15:52.321 "discovery_filter": "match_any", 00:15:52.321 "admin_cmd_passthru": { 00:15:52.321 "identify_ctrlr": false 00:15:52.321 } 00:15:52.321 } 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "method": "nvmf_set_max_subsystems", 00:15:52.321 "params": { 00:15:52.321 "max_subsystems": 1024 00:15:52.321 } 00:15:52.321 }, 00:15:52.321 { 00:15:52.321 "method": "nvmf_set_crdt", 00:15:52.321 "params": { 00:15:52.321 "crdt1": 0, 00:15:52.321 "crdt2": 0, 00:15:52.321 "crdt3": 0 00:15:52.321 } 00:15:52.321 } 00:15:52.321 ] 00:15:52.321 
}, 00:15:52.321 { 00:15:52.321 "subsystem": "iscsi", 00:15:52.321 "config": [ 00:15:52.321 { 00:15:52.321 "method": "iscsi_set_options", 00:15:52.321 "params": { 00:15:52.321 "node_base": "iqn.2016-06.io.spdk", 00:15:52.321 "max_sessions": 128, 00:15:52.321 "max_connections_per_session": 2, 00:15:52.321 "max_queue_depth": 64, 00:15:52.321 "default_time2wait": 2, 00:15:52.321 "default_time2retain": 20, 00:15:52.321 "first_burst_length": 8192, 00:15:52.321 "immediate_data": true, 00:15:52.321 "allow_duplicated_isid": false, 00:15:52.321 "error_recovery_level": 0, 00:15:52.321 "nop_timeout": 60, 00:15:52.321 "nop_in_interval": 30, 00:15:52.321 "disable_chap": false, 00:15:52.321 "require_chap": false, 00:15:52.321 "mutual_chap": false, 00:15:52.321 "chap_group": 0, 00:15:52.321 "max_large_datain_per_connection": 64, 00:15:52.321 "max_r2t_per_connection": 4, 00:15:52.321 "pdu_pool_size": 36864, 00:15:52.321 "immediate_data_pool_size": 16384, 00:15:52.321 "data_out_pool_size": 2048 00:15:52.321 } 00:15:52.321 } 00:15:52.321 ] 00:15:52.321 } 00:15:52.321 ] 00:15:52.321 }' 00:15:52.321 15:24:05 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.321 15:24:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:52.579 [2024-07-11 15:24:05.953445] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:52.579 [2024-07-11 15:24:05.953621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77243 ] 00:15:52.579 [2024-07-11 15:24:06.116620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.837 [2024-07-11 15:24:06.298998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.771 [2024-07-11 15:24:07.072099] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:53.771 [2024-07-11 15:24:07.073202] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:53.771 [2024-07-11 15:24:07.079252] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:53.771 [2024-07-11 15:24:07.079369] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:53.771 [2024-07-11 15:24:07.079385] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:53.771 [2024-07-11 15:24:07.079394] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:53.771 [2024-07-11 15:24:07.088191] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:53.771 [2024-07-11 15:24:07.088219] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:53.771 [2024-07-11 15:24:07.094825] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:53.771 [2024-07-11 15:24:07.094959] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:53.771 [2024-07-11 15:24:07.111070] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 
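The assertions that follow are the point of the test: after the config-driven restart, ublk_get_disks must report the same device node even though ublk_start_disk was never re-issued by hand. A standalone spot-check along the same lines (a sketch; it assumes the default /var/tmp/spdk.sock RPC socket):

    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
    test -b /dev/ublkb0 && echo 'ublk block device restored'

This mirrors the [[ ... == /dev/ublkb0 ]] and [[ -b /dev/ublkb0 ]] checks in ublk.sh just below.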
00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77243 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 77243 ']' 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 77243 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77243 00:15:53.771 killing process with pid 77243 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77243' 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 77243 00:15:53.771 15:24:07 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 77243 00:15:55.673 [2024-07-11 15:24:08.920809] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:55.673 [2024-07-11 15:24:08.959104] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:55.673 [2024-07-11 15:24:08.959290] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:55.673 [2024-07-11 15:24:08.968150] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:55.673 [2024-07-11 15:24:08.968213] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:55.673 [2024-07-11 15:24:08.968226] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:55.673 [2024-07-11 15:24:08.968256] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:55.673 [2024-07-11 15:24:08.971334] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:56.649 15:24:10 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:56.649 ************************************ 00:15:56.649 END TEST test_save_ublk_config 00:15:56.649 ************************************ 00:15:56.649 00:15:56.649 real 0m8.352s 00:15:56.649 user 0m6.919s 00:15:56.649 sys 0m2.328s 00:15:56.649 15:24:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.649 15:24:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@1142 -- # return 0 00:15:56.649 15:24:10 ublk -- ublk/ublk.sh@139 -- # spdk_pid=77321 00:15:56.649 15:24:10 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.649 15:24:10 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT 
SIGTERM EXIT 00:15:56.649 15:24:10 ublk -- ublk/ublk.sh@141 -- # waitforlisten 77321 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@829 -- # '[' -z 77321 ']' 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.649 15:24:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 [2024-07-11 15:24:10.256454] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:56.649 [2024-07-11 15:24:10.256625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77321 ] 00:15:56.907 [2024-07-11 15:24:10.414040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:57.165 [2024-07-11 15:24:10.577070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.165 [2024-07-11 15:24:10.577082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.732 15:24:11 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.732 15:24:11 ublk -- common/autotest_common.sh@862 -- # return 0 00:15:57.732 15:24:11 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:57.732 15:24:11 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:57.732 15:24:11 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.732 15:24:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.732 ************************************ 00:15:57.732 START TEST test_create_ublk 00:15:57.732 ************************************ 00:15:57.732 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:15:57.732 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:57.732 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.732 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.732 [2024-07-11 15:24:11.245147] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:57.732 [2024-07-11 15:24:11.247910] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:57.732 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.732 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:57.732 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:57.732 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.732 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.991 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.991 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:57.991 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:57.992 15:24:11 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.992 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.992 [2024-07-11 15:24:11.475269] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:57.992 [2024-07-11 15:24:11.475747] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:57.992 [2024-07-11 15:24:11.475774] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:57.992 [2024-07-11 15:24:11.475787] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:57.992 [2024-07-11 15:24:11.479388] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:57.992 [2024-07-11 15:24:11.479422] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:57.992 [2024-07-11 15:24:11.489141] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:57.992 [2024-07-11 15:24:11.495307] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:57.992 [2024-07-11 15:24:11.507758] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:57.992 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:57.992 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.992 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:57.992 15:24:11 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:57.992 { 00:15:57.992 "ublk_device": "/dev/ublkb0", 00:15:57.992 "id": 0, 00:15:57.992 "queue_depth": 512, 00:15:57.992 "num_queues": 4, 00:15:57.992 "bdev_name": "Malloc0" 00:15:57.992 } 00:15:57.992 ]' 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:57.992 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:58.251 15:24:11 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:58.251 15:24:11 ublk.test_create_ublk -- 
lvol/common.sh@43 -- # local rw=write 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:58.251 15:24:11 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:58.510 fio: verification read phase will never start because write phase uses all of runtime 00:15:58.510 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:58.510 fio-3.35 00:15:58.510 Starting 1 process 00:16:08.484 00:16:08.484 fio_test: (groupid=0, jobs=1): err= 0: pid=77372: Thu Jul 11 15:24:22 2024 00:16:08.484 write: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(415MiB/10001msec); 0 zone resets 00:16:08.484 clat (usec): min=62, max=8460, avg=92.88, stdev=167.06 00:16:08.484 lat (usec): min=63, max=8478, avg=93.53, stdev=167.07 00:16:08.484 clat percentiles (usec): 00:16:08.484 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 75], 00:16:08.484 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 81], 00:16:08.484 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 111], 00:16:08.484 | 99.00th=[ 130], 99.50th=[ 147], 99.90th=[ 3359], 99.95th=[ 3654], 00:16:08.484 | 99.99th=[ 4047] 00:16:08.484 bw ( KiB/s): min=18936, max=44600, per=99.91%, avg=42423.21, stdev=5714.65, samples=19 00:16:08.484 iops : min= 4734, max=11150, avg=10605.79, stdev=1428.66, samples=19 00:16:08.484 lat (usec) : 100=88.41%, 250=11.17%, 500=0.01%, 750=0.02%, 1000=0.03% 00:16:08.484 lat (msec) : 2=0.12%, 4=0.23%, 10=0.01% 00:16:08.484 cpu : usr=2.69%, sys=6.98%, ctx=106162, majf=0, minf=797 00:16:08.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.484 issued rwts: total=0,106160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:08.484 00:16:08.484 Run status group 0 (all jobs): 00:16:08.484 WRITE: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=415MiB (435MB), run=10001-10001msec 00:16:08.484 00:16:08.484 Disk stats (read/write): 00:16:08.484 ublkb0: ios=0/104985, merge=0/0, ticks=0/8986, in_queue=8986, util=99.12% 00:16:08.484 15:24:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.484 [2024-07-11 15:24:22.037538] 
ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.484 [2024-07-11 15:24:22.071612] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.484 [2024-07-11 15:24:22.073183] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.484 [2024-07-11 15:24:22.081147] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.484 [2024-07-11 15:24:22.081530] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:08.484 [2024-07-11 15:24:22.081548] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.484 15:24:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.484 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.484 [2024-07-11 15:24:22.097154] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:08.743 request: 00:16:08.743 { 00:16:08.743 "ublk_id": 0, 00:16:08.743 "method": "ublk_stop_disk", 00:16:08.743 "req_id": 1 00:16:08.743 } 00:16:08.743 Got JSON-RPC error response 00:16:08.743 response: 00:16:08.743 { 00:16:08.743 "code": -19, 00:16:08.743 "message": "No such device" 00:16:08.743 } 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:08.743 15:24:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.743 [2024-07-11 15:24:22.111248] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:08.743 [2024-07-11 15:24:22.117068] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:08.743 [2024-07-11 15:24:22.117130] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.743 15:24:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:08.743 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.743 
15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.002 15:24:22 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:09.002 ************************************ 00:16:09.002 END TEST test_create_ublk 00:16:09.002 ************************************ 00:16:09.002 15:24:22 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:09.002 00:16:09.002 real 0m11.288s 00:16:09.002 user 0m0.718s 00:16:09.002 sys 0m0.801s 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.002 15:24:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 15:24:22 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:09.002 15:24:22 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:09.002 15:24:22 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:09.002 15:24:22 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.002 15:24:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 ************************************ 00:16:09.002 START TEST test_create_multi_ublk 00:16:09.002 ************************************ 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 [2024-07-11 15:24:22.591137] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:09.002 [2024-07-11 15:24:22.593481] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.002 15:24:22 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.002 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.261 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.261 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:09.261 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:09.261 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.261 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.261 [2024-07-11 15:24:22.815282] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:09.262 [2024-07-11 15:24:22.815813] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:09.262 [2024-07-11 15:24:22.815838] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:09.262 [2024-07-11 15:24:22.815848] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:09.262 [2024-07-11 15:24:22.824359] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:09.262 [2024-07-11 15:24:22.824382] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:09.262 [2024-07-11 15:24:22.831148] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:09.262 [2024-07-11 15:24:22.831874] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:09.262 [2024-07-11 15:24:22.845213] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:09.262 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.262 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:09.262 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.262 15:24:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:09.262 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.262 15:24:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.523 [2024-07-11 15:24:23.088344] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:09.523 [2024-07-11 15:24:23.088899] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:09.523 [2024-07-11 15:24:23.088922] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:09.523 [2024-07-11 15:24:23.088934] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:09.523 [2024-07-11 
15:24:23.100421] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:09.523 [2024-07-11 15:24:23.100453] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:09.523 [2024-07-11 15:24:23.105094] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:09.523 [2024-07-11 15:24:23.105823] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:09.523 [2024-07-11 15:24:23.117137] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.523 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.782 [2024-07-11 15:24:23.355293] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:09.782 [2024-07-11 15:24:23.355813] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:09.782 [2024-07-11 15:24:23.355840] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:09.782 [2024-07-11 15:24:23.355849] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:09.782 [2024-07-11 15:24:23.363071] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:09.782 [2024-07-11 15:24:23.363096] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:09.782 [2024-07-11 15:24:23.371140] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:09.782 [2024-07-11 15:24:23.371930] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:09.782 [2024-07-11 15:24:23.388149] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:09.782 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
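With the fourth malloc bdev just created, the ublk_start_disk call below completes the pattern; stripped of the xtrace noise, the whole creation loop in ublk.sh reduces to roughly this sketch (rpc_cmd being the harness wrapper around scripts/rpc.py):

    for i in 0 1 2 3; do
        rpc_cmd bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB bdev, 4096-byte blocks
        rpc_cmd ublk_start_disk Malloc$i $i -q 4 -d 512    # 4 queues, queue depth 512
    done

matching the per-device ADD_DEV/SET_PARAMS/START_DEV control-command sequences in the surrounding records.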
00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.041 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.041 [2024-07-11 15:24:23.632268] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:10.041 [2024-07-11 15:24:23.632707] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:10.041 [2024-07-11 15:24:23.632721] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:10.041 [2024-07-11 15:24:23.632731] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:10.041 [2024-07-11 15:24:23.640359] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:10.041 [2024-07-11 15:24:23.640393] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:10.041 [2024-07-11 15:24:23.647093] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:10.041 [2024-07-11 15:24:23.647831] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:10.041 [2024-07-11 15:24:23.656161] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:10.300 { 00:16:10.300 "ublk_device": "/dev/ublkb0", 00:16:10.300 "id": 0, 00:16:10.300 "queue_depth": 512, 00:16:10.300 "num_queues": 4, 00:16:10.300 "bdev_name": "Malloc0" 00:16:10.300 }, 00:16:10.300 { 00:16:10.300 "ublk_device": "/dev/ublkb1", 00:16:10.300 "id": 1, 00:16:10.300 "queue_depth": 512, 00:16:10.300 "num_queues": 4, 00:16:10.300 "bdev_name": "Malloc1" 00:16:10.300 }, 00:16:10.300 { 00:16:10.300 "ublk_device": "/dev/ublkb2", 00:16:10.300 "id": 2, 00:16:10.300 "queue_depth": 512, 00:16:10.300 "num_queues": 4, 00:16:10.300 "bdev_name": "Malloc2" 00:16:10.300 }, 00:16:10.300 { 00:16:10.300 "ublk_device": "/dev/ublkb3", 00:16:10.300 "id": 3, 00:16:10.300 "queue_depth": 512, 00:16:10.300 "num_queues": 4, 00:16:10.300 "bdev_name": "Malloc3" 00:16:10.300 } 00:16:10.300 ]' 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # 
[[ 0 = \0 ]] 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.300 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:10.559 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:10.559 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.559 15:24:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.559 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:10.817 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:11.076 15:24:24 
ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:11.076 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.335 [2024-07-11 15:24:24.723439] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.335 [2024-07-11 15:24:24.753413] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.335 [2024-07-11 15:24:24.756342] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.335 [2024-07-11 15:24:24.762072] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.335 [2024-07-11 15:24:24.762465] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:11.335 [2024-07-11 15:24:24.762488] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.335 [2024-07-11 15:24:24.769446] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.335 [2024-07-11 15:24:24.815188] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.335 [2024-07-11 15:24:24.816520] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.335 [2024-07-11 15:24:24.823067] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.335 [2024-07-11 15:24:24.823405] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:11.335 [2024-07-11 15:24:24.823424] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.335 [2024-07-11 15:24:24.838177] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.335 [2024-07-11 15:24:24.883140] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.335 [2024-07-11 15:24:24.888311] ublk.c: 
434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.335 [2024-07-11 15:24:24.898068] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.335 [2024-07-11 15:24:24.898455] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:11.335 [2024-07-11 15:24:24.898476] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.335 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:11.335 [2024-07-11 15:24:24.902209] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:11.594 [2024-07-11 15:24:24.951186] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:11.594 [2024-07-11 15:24:24.952258] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:11.594 [2024-07-11 15:24:24.959159] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:11.594 [2024-07-11 15:24:24.959505] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:11.594 [2024-07-11 15:24:24.959524] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:11.594 15:24:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.594 15:24:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:11.852 [2024-07-11 15:24:25.251254] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:11.852 [2024-07-11 15:24:25.258151] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:11.852 [2024-07-11 15:24:25.258248] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:11.852 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:11.852 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:11.852 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:11.852 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.852 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.110 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.110 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.110 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:12.110 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.110 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.368 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.368 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.368 15:24:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:12.368 15:24:25 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.368 15:24:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.626 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.626 15:24:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:12.626 15:24:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:12.626 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.626 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:12.883 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:13.141 ************************************ 00:16:13.141 END TEST test_create_multi_ublk 00:16:13.141 ************************************ 00:16:13.141 15:24:26 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:13.141 00:16:13.141 real 0m3.936s 00:16:13.141 user 0m1.346s 00:16:13.141 sys 0m0.146s 00:16:13.141 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.141 15:24:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@1142 -- # return 0 00:16:13.141 15:24:26 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:13.141 15:24:26 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:13.141 15:24:26 ublk -- ublk/ublk.sh@130 -- # killprocess 77321 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@948 -- # '[' -z 77321 ']' 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@952 -- # kill -0 77321 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@953 -- # uname 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77321 00:16:13.141 killing process with pid 77321 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77321' 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@967 -- # kill 77321 00:16:13.141 15:24:26 ublk -- common/autotest_common.sh@972 -- # wait 77321 00:16:14.074 [2024-07-11 15:24:27.451032] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:14.074 [2024-07-11 15:24:27.451127] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:15.035 00:16:15.035 real 0m26.878s 00:16:15.035 user 0m40.623s 00:16:15.035 sys 0m7.980s 00:16:15.035 15:24:28 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.035 ************************************ 00:16:15.035 END TEST ublk 00:16:15.035 ************************************ 00:16:15.035 15:24:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.035 15:24:28 -- common/autotest_common.sh@1142 -- # return 0 00:16:15.035 15:24:28 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:15.035 15:24:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:15.035 15:24:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.035 15:24:28 -- common/autotest_common.sh@10 -- # set +x 00:16:15.035 ************************************ 00:16:15.035 START TEST ublk_recovery 00:16:15.035 ************************************ 00:16:15.035 15:24:28 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:15.292 * Looking for test storage... 00:16:15.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:15.292 15:24:28 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:15.292 15:24:28 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:15.292 15:24:28 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:15.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.293 15:24:28 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77704 00:16:15.293 15:24:28 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:15.293 15:24:28 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.293 15:24:28 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77704 00:16:15.293 15:24:28 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 77704 ']' 00:16:15.293 15:24:28 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.293 15:24:28 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.293 15:24:28 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
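Note: the recovery scenario that follows drives a single ublk disk through a hard target crash. A minimal sketch of the setup phase, using the same RPCs that appear in the rpc_cmd traces below (the rpc.py path matches this workspace; the $rpc shorthand is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc ublk_create_target                      # "UBLK target created successfully"
    $rpc bdev_malloc_create -b malloc0 64 4096   # 64 MiB ramdisk with 4 KiB blocks
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128   # exposes /dev/ublkb1: 2 queues, queue depth 128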
00:16:15.293 15:24:28 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.293 15:24:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:15.293 [2024-07-11 15:24:28.785645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:15.293 [2024-07-11 15:24:28.785860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77704 ] 00:16:15.550 [2024-07-11 15:24:28.955257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.550 [2024-07-11 15:24:29.120239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.550 [2024-07-11 15:24:29.120254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:16.484 15:24:29 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.484 [2024-07-11 15:24:29.803145] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:16.484 [2024-07-11 15:24:29.805475] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.484 15:24:29 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.484 malloc0 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.484 15:24:29 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:16.484 [2024-07-11 15:24:29.927264] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:16.484 [2024-07-11 15:24:29.927413] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:16.484 [2024-07-11 15:24:29.927429] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:16.484 [2024-07-11 15:24:29.927440] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:16.484 [2024-07-11 15:24:29.936279] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:16.484 [2024-07-11 15:24:29.936310] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:16.484 [2024-07-11 15:24:29.943128] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:16.484 [2024-07-11 15:24:29.943287] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:16.484 [2024-07-11 15:24:29.958114] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:16.484 1 00:16:16.484 15:24:29 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:16.484 15:24:29 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:17.419 15:24:30 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77745 00:16:17.419 15:24:30 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:17.419 15:24:30 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:17.678 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:17.678 fio-3.35 00:16:17.678 Starting 1 process 00:16:22.950 15:24:35 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77704 00:16:22.950 15:24:35 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:28.222 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77704 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:28.222 15:24:40 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77853 00:16:28.222 15:24:40 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:28.222 15:24:40 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:28.222 15:24:40 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77853 00:16:28.222 15:24:40 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 77853 ']' 00:16:28.222 15:24:40 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.222 15:24:40 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.222 15:24:40 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.222 15:24:40 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.222 15:24:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.222 [2024-07-11 15:24:41.102853] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
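Note: this is the crash-injection step, reconstructed from the commands traced above (a sketch; $spdk_pid stands for the first target's pid, 77704 in this run):

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &         # keep I/O in flight against the ublk disk
    sleep 5
    kill -9 "$spdk_pid"                     # SIGKILL the target mid-I/O; no clean shutdown
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # second target instance (pid 77853 here)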
00:16:28.222 [2024-07-11 15:24:41.103308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77853 ] 00:16:28.222 [2024-07-11 15:24:41.278651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:28.222 [2024-07-11 15:24:41.488920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.222 [2024-07-11 15:24:41.488935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:16:28.789 15:24:42 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.789 [2024-07-11 15:24:42.152108] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:28.789 [2024-07-11 15:24:42.154593] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.789 15:24:42 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.789 malloc0 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.789 15:24:42 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:28.789 [2024-07-11 15:24:42.271731] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:28.789 [2024-07-11 15:24:42.271797] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:28.789 [2024-07-11 15:24:42.271810] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:28.789 [2024-07-11 15:24:42.279205] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:28.789 [2024-07-11 15:24:42.279233] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:28.789 1 00:16:28.789 [2024-07-11 15:24:42.279359] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:28.789 15:24:42 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.789 15:24:42 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77745 00:16:55.327 [2024-07-11 15:25:06.077132] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:55.327 [2024-07-11 15:25:06.084072] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:55.327 [2024-07-11 15:25:06.091184] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:55.327 [2024-07-11 15:25:06.091248] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:21.871 00:17:21.871 
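Note: recovery proper, as replayed in the trace above. The second target does not recreate the disk: ublk_recover_disk re-attaches the still-present /dev/ublkb1 through the driver's user-recovery path (GET_DEV_INFO, then START_USER_RECOVERY/END_USER_RECOVERY). A sketch with the same RPCs ($rpc as before):

    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096   # backing bdev must reappear under the same name
    $rpc ublk_recover_disk malloc0 1             # reclaims ublk1: 2 queues, qd 128, flags 0xda

The fio job launched before the crash rides out the outage and completes cleanly (err= 0) once I/O resumes, as the results below show.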
fio_test: (groupid=0, jobs=1): err= 0: pid=77748: Thu Jul 11 15:25:31 2024 00:17:21.871 read: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(2534MiB/60003msec) 00:17:21.871 slat (nsec): min=1840, max=3028.5k, avg=6085.44, stdev=4759.43 00:17:21.871 clat (usec): min=776, max=30129k, avg=5584.95, stdev=284830.35 00:17:21.871 lat (usec): min=781, max=30129k, avg=5591.04, stdev=284830.35 00:17:21.871 clat percentiles (usec): 00:17:21.871 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:17:21.871 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2737], 00:17:21.871 | 70.00th=[ 2835], 80.00th=[ 2933], 90.00th=[ 3195], 95.00th=[ 4359], 00:17:21.871 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8979], 99.95th=[12125], 00:17:21.871 | 99.99th=[14484] 00:17:21.871 bw ( KiB/s): min=10035, max=94536, per=100.00%, avg=85203.48, stdev=15385.47, samples=60 00:17:21.871 iops : min= 2508, max=23634, avg=21300.83, stdev=3846.42, samples=60 00:17:21.871 write: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(2532MiB/60003msec); 0 zone resets 00:17:21.871 slat (nsec): min=1800, max=190367, avg=6154.97, stdev=2839.18 00:17:21.871 clat (usec): min=797, max=30130k, avg=6244.37, stdev=313027.30 00:17:21.871 lat (usec): min=802, max=30130k, avg=6250.52, stdev=313027.30 00:17:21.871 clat percentiles (msec): 00:17:21.871 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:21.871 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:17:21.871 | 70.00th=[ 3], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:21.871 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 13], 00:17:21.871 | 99.99th=[17113] 00:17:21.871 bw ( KiB/s): min= 9948, max=93824, per=100.00%, avg=85149.18, stdev=15347.62, samples=60 00:17:21.871 iops : min= 2487, max=23456, avg=21287.27, stdev=3836.93, samples=60 00:17:21.871 lat (usec) : 1000=0.01% 00:17:21.871 lat (msec) : 2=0.22%, 4=94.01%, 10=5.69%, 20=0.06%, >=2000=0.01% 00:17:21.871 cpu : usr=5.56%, sys=12.31%, ctx=40191, majf=0, minf=13 00:17:21.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:21.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:21.871 issued rwts: total=648765,648275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:21.871 00:17:21.871 Run status group 0 (all jobs): 00:17:21.871 READ: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=2534MiB (2657MB), run=60003-60003msec 00:17:21.871 WRITE: bw=42.2MiB/s (44.3MB/s), 42.2MiB/s-42.2MiB/s (44.3MB/s-44.3MB/s), io=2532MiB (2655MB), run=60003-60003msec 00:17:21.871 00:17:21.871 Disk stats (read/write): 00:17:21.871 ublkb1: ios=646474/646024, merge=0/0, ticks=3558753/3914528, in_queue=7473281, util=99.91% 00:17:21.871 15:25:31 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.871 [2024-07-11 15:25:31.232487] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:21.871 [2024-07-11 15:25:31.284108] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:21.871 [2024-07-11 15:25:31.284512] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:21.871 [2024-07-11 15:25:31.291178] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:21.871 [2024-07-11 15:25:31.291361] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:21.871 [2024-07-11 15:25:31.291388] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.871 15:25:31 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.871 [2024-07-11 15:25:31.307132] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:21.871 [2024-07-11 15:25:31.315097] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:21.871 [2024-07-11 15:25:31.315152] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.871 15:25:31 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:21.871 15:25:31 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:21.871 15:25:31 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77853 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 77853 ']' 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 77853 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77853 00:17:21.871 killing process with pid 77853 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77853' 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@967 -- # kill 77853 00:17:21.871 15:25:31 ublk_recovery -- common/autotest_common.sh@972 -- # wait 77853 00:17:21.871 [2024-07-11 15:25:32.205256] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:17:21.872 [2024-07-11 15:25:32.205316] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:17:21.872 ************************************ 00:17:21.872 END TEST ublk_recovery 00:17:21.872 ************************************ 00:17:21.872 00:17:21.872 real 1m4.736s 00:17:21.872 user 1m50.731s 00:17:21.872 sys 0m18.627s 00:17:21.872 15:25:33 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:21.872 15:25:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 15:25:33 -- common/autotest_common.sh@1142 -- # return 0 00:17:21.872 15:25:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:21.872 15:25:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.872 15:25:33 -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 15:25:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- 
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:17:21.872 15:25:33 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:21.872 15:25:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:21.872 15:25:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.872 15:25:33 -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 ************************************ 00:17:21.872 START TEST ftl 00:17:21.872 ************************************ 00:17:21.872 15:25:33 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:21.872 * Looking for test storage... 00:17:21.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:21.872 15:25:33 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:21.872 15:25:33 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.872 15:25:33 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.872 15:25:33 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:21.872 15:25:33 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:21.872 15:25:33 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.872 15:25:33 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:21.872 15:25:33 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:21.872 15:25:33 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.872 15:25:33 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.872 15:25:33 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:21.872 15:25:33 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:21.872 15:25:33 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.872 15:25:33 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.872 15:25:33 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:21.872 15:25:33 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:21.872 15:25:33 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.872 15:25:33 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.872 15:25:33 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:21.872 15:25:33 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:21.872 15:25:33 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.872 15:25:33 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.872 15:25:33 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.872 15:25:33 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.872 15:25:33 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:21.872 15:25:33 ftl -- ftl/common.sh@23 -- # 
spdk_ini_pid= 00:17:21.872 15:25:33 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.872 15:25:33 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:21.872 15:25:33 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:21.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:21.872 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.872 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.872 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.872 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:21.872 15:25:34 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:21.872 15:25:34 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78628 00:17:21.872 15:25:34 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78628 00:17:21.872 15:25:34 ftl -- common/autotest_common.sh@829 -- # '[' -z 78628 ']' 00:17:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.872 15:25:34 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.872 15:25:34 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.872 15:25:34 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.872 15:25:34 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.872 15:25:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 [2024-07-11 15:25:34.142244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:21.872 [2024-07-11 15:25:34.142422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78628 ] 00:17:21.872 [2024-07-11 15:25:34.305082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.872 [2024-07-11 15:25:34.531352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.872 15:25:35 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.872 15:25:35 ftl -- common/autotest_common.sh@862 -- # return 0 00:17:21.872 15:25:35 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:21.872 15:25:35 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:22.809 15:25:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:22.809 15:25:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:23.069 15:25:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@50 -- # break 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:23.328 15:25:36 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:23.587 15:25:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:23.587 15:25:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:23.587 15:25:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:23.587 15:25:37 ftl -- ftl/ftl.sh@63 -- # break 00:17:23.587 15:25:37 ftl -- ftl/ftl.sh@66 -- # killprocess 78628 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@948 -- # '[' -z 78628 ']' 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@952 -- # kill -0 78628 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@953 -- # uname 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78628 00:17:23.587 killing process with pid 78628 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78628' 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@967 -- # kill 78628 00:17:23.587 15:25:37 ftl -- common/autotest_common.sh@972 -- # wait 78628 00:17:25.492 15:25:38 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:25.492 15:25:38 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:25.492 15:25:38 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:25.492 15:25:38 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.492 15:25:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:25.492 ************************************ 00:17:25.492 START TEST ftl_fio_basic 00:17:25.492 ************************************ 00:17:25.492 15:25:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:25.492 * Looking for test storage... 00:17:25.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:25.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78758 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78758 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 78758 ']' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.492 15:25:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:25.752 [2024-07-11 15:25:39.197190] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
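Note: ftl_fio_basic setup starts here: create_base_bdev builds the FTL base device on the QEMU NVMe disk at 0000:00:11.0. A sketch of the calls traced below ($rpc as before; $lvs stands for the lvstore UUID the trace prints, 830740ef-3559-4272-987a-8e9c8b7afdb4):

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # -> nvme0n1, 1310720 x 4 KiB blocks
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"                 # 103424 MiB, thin-provisioned

The requested 103424 MiB volume is far larger than the 5 GiB disk; the -t (thin provision) flag is what makes that legal, since clusters are only allocated as they are written.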
00:17:25.752 [2024-07-11 15:25:39.197362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78758 ] 00:17:26.034 [2024-07-11 15:25:39.368319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.034 [2024-07-11 15:25:39.531076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.034 [2024-07-11 15:25:39.531186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.034 [2024-07-11 15:25:39.531198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:26.602 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:27.169 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:27.169 { 00:17:27.169 "name": "nvme0n1", 00:17:27.169 "aliases": [ 00:17:27.169 "f1904b4d-e3bf-4a1d-8eff-9a9ec9731c9f" 00:17:27.169 ], 00:17:27.169 "product_name": "NVMe disk", 00:17:27.169 "block_size": 4096, 00:17:27.169 "num_blocks": 1310720, 00:17:27.169 "uuid": "f1904b4d-e3bf-4a1d-8eff-9a9ec9731c9f", 00:17:27.169 "assigned_rate_limits": { 00:17:27.169 "rw_ios_per_sec": 0, 00:17:27.169 "rw_mbytes_per_sec": 0, 00:17:27.169 "r_mbytes_per_sec": 0, 00:17:27.169 "w_mbytes_per_sec": 0 00:17:27.169 }, 00:17:27.169 "claimed": false, 00:17:27.169 "zoned": false, 00:17:27.169 "supported_io_types": { 00:17:27.169 "read": true, 00:17:27.169 "write": true, 00:17:27.169 "unmap": true, 00:17:27.169 "flush": true, 00:17:27.169 "reset": true, 00:17:27.169 "nvme_admin": true, 00:17:27.169 "nvme_io": true, 00:17:27.169 "nvme_io_md": false, 00:17:27.169 "write_zeroes": true, 00:17:27.169 "zcopy": false, 00:17:27.169 "get_zone_info": false, 00:17:27.169 "zone_management": false, 00:17:27.169 "zone_append": false, 00:17:27.169 "compare": true, 00:17:27.169 "compare_and_write": false, 00:17:27.169 "abort": true, 00:17:27.169 "seek_hole": false, 00:17:27.169 
"seek_data": false, 00:17:27.169 "copy": true, 00:17:27.169 "nvme_iov_md": false 00:17:27.169 }, 00:17:27.169 "driver_specific": { 00:17:27.169 "nvme": [ 00:17:27.169 { 00:17:27.169 "pci_address": "0000:00:11.0", 00:17:27.169 "trid": { 00:17:27.169 "trtype": "PCIe", 00:17:27.169 "traddr": "0000:00:11.0" 00:17:27.169 }, 00:17:27.169 "ctrlr_data": { 00:17:27.169 "cntlid": 0, 00:17:27.169 "vendor_id": "0x1b36", 00:17:27.169 "model_number": "QEMU NVMe Ctrl", 00:17:27.169 "serial_number": "12341", 00:17:27.169 "firmware_revision": "8.0.0", 00:17:27.170 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:27.170 "oacs": { 00:17:27.170 "security": 0, 00:17:27.170 "format": 1, 00:17:27.170 "firmware": 0, 00:17:27.170 "ns_manage": 1 00:17:27.170 }, 00:17:27.170 "multi_ctrlr": false, 00:17:27.170 "ana_reporting": false 00:17:27.170 }, 00:17:27.170 "vs": { 00:17:27.170 "nvme_version": "1.4" 00:17:27.170 }, 00:17:27.170 "ns_data": { 00:17:27.170 "id": 1, 00:17:27.170 "can_share": false 00:17:27.170 } 00:17:27.170 } 00:17:27.170 ], 00:17:27.170 "mp_policy": "active_passive" 00:17:27.170 } 00:17:27.170 } 00:17:27.170 ]' 00:17:27.170 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:27.170 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:27.170 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:27.429 15:25:40 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:27.429 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:27.429 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:27.688 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=830740ef-3559-4272-987a-8e9c8b7afdb4 00:17:27.688 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 830740ef-3559-4272-987a-8e9c8b7afdb4 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=627c0771-c9e6-4000-a05c-615608253ef8 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 627c0771-c9e6-4000-a05c-615608253ef8 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=627c0771-c9e6-4000-a05c-615608253ef8 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 627c0771-c9e6-4000-a05c-615608253ef8 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=627c0771-c9e6-4000-a05c-615608253ef8 00:17:27.947 15:25:41 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:27.947 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 627c0771-c9e6-4000-a05c-615608253ef8 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:28.205 { 00:17:28.205 "name": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:28.205 "aliases": [ 00:17:28.205 "lvs/nvme0n1p0" 00:17:28.205 ], 00:17:28.205 "product_name": "Logical Volume", 00:17:28.205 "block_size": 4096, 00:17:28.205 "num_blocks": 26476544, 00:17:28.205 "uuid": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:28.205 "assigned_rate_limits": { 00:17:28.205 "rw_ios_per_sec": 0, 00:17:28.205 "rw_mbytes_per_sec": 0, 00:17:28.205 "r_mbytes_per_sec": 0, 00:17:28.205 "w_mbytes_per_sec": 0 00:17:28.205 }, 00:17:28.205 "claimed": false, 00:17:28.205 "zoned": false, 00:17:28.205 "supported_io_types": { 00:17:28.205 "read": true, 00:17:28.205 "write": true, 00:17:28.205 "unmap": true, 00:17:28.205 "flush": false, 00:17:28.205 "reset": true, 00:17:28.205 "nvme_admin": false, 00:17:28.205 "nvme_io": false, 00:17:28.205 "nvme_io_md": false, 00:17:28.205 "write_zeroes": true, 00:17:28.205 "zcopy": false, 00:17:28.205 "get_zone_info": false, 00:17:28.205 "zone_management": false, 00:17:28.205 "zone_append": false, 00:17:28.205 "compare": false, 00:17:28.205 "compare_and_write": false, 00:17:28.205 "abort": false, 00:17:28.205 "seek_hole": true, 00:17:28.205 "seek_data": true, 00:17:28.205 "copy": false, 00:17:28.205 "nvme_iov_md": false 00:17:28.205 }, 00:17:28.205 "driver_specific": { 00:17:28.205 "lvol": { 00:17:28.205 "lvol_store_uuid": "830740ef-3559-4272-987a-8e9c8b7afdb4", 00:17:28.205 "base_bdev": "nvme0n1", 00:17:28.205 "thin_provision": true, 00:17:28.205 "num_allocated_clusters": 0, 00:17:28.205 "snapshot": false, 00:17:28.205 "clone": false, 00:17:28.205 "esnap_clone": false 00:17:28.205 } 00:17:28.205 } 00:17:28.205 } 00:17:28.205 ]' 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:28.205 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:28.206 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:28.206 15:25:41 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 627c0771-c9e6-4000-a05c-615608253ef8 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=627c0771-c9e6-4000-a05c-615608253ef8 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 627c0771-c9e6-4000-a05c-615608253ef8 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:28.786 { 00:17:28.786 "name": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:28.786 "aliases": [ 00:17:28.786 "lvs/nvme0n1p0" 00:17:28.786 ], 00:17:28.786 "product_name": "Logical Volume", 00:17:28.786 "block_size": 4096, 00:17:28.786 "num_blocks": 26476544, 00:17:28.786 "uuid": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:28.786 "assigned_rate_limits": { 00:17:28.786 "rw_ios_per_sec": 0, 00:17:28.786 "rw_mbytes_per_sec": 0, 00:17:28.786 "r_mbytes_per_sec": 0, 00:17:28.786 "w_mbytes_per_sec": 0 00:17:28.786 }, 00:17:28.786 "claimed": false, 00:17:28.786 "zoned": false, 00:17:28.786 "supported_io_types": { 00:17:28.786 "read": true, 00:17:28.786 "write": true, 00:17:28.786 "unmap": true, 00:17:28.786 "flush": false, 00:17:28.786 "reset": true, 00:17:28.786 "nvme_admin": false, 00:17:28.786 "nvme_io": false, 00:17:28.786 "nvme_io_md": false, 00:17:28.786 "write_zeroes": true, 00:17:28.786 "zcopy": false, 00:17:28.786 "get_zone_info": false, 00:17:28.786 "zone_management": false, 00:17:28.786 "zone_append": false, 00:17:28.786 "compare": false, 00:17:28.786 "compare_and_write": false, 00:17:28.786 "abort": false, 00:17:28.786 "seek_hole": true, 00:17:28.786 "seek_data": true, 00:17:28.786 "copy": false, 00:17:28.786 "nvme_iov_md": false 00:17:28.786 }, 00:17:28.786 "driver_specific": { 00:17:28.786 "lvol": { 00:17:28.786 "lvol_store_uuid": "830740ef-3559-4272-987a-8e9c8b7afdb4", 00:17:28.786 "base_bdev": "nvme0n1", 00:17:28.786 "thin_provision": true, 00:17:28.786 "num_allocated_clusters": 0, 00:17:28.786 "snapshot": false, 00:17:28.786 "clone": false, 00:17:28.786 "esnap_clone": false 00:17:28.786 } 00:17:28.786 } 00:17:28.786 } 00:17:28.786 ]' 00:17:28.786 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:29.044 15:25:42 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:29.302 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 627c0771-c9e6-4000-a05c-615608253ef8 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=627c0771-c9e6-4000-a05c-615608253ef8 
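Note: two details in the sizing pass above. First, get_bdev_size multiplies block_size by num_blocks and converts to MiB, which is where bdev_size=103424 comes from (4096 * 26476544 / 2^20 = 103424); the 5171 MiB handed to bdev_split_create matches 5% of that base size, which appears to be how the NV cache is dimensioned. Second, "line 52: [: -eq: unary operator expected" is a real shell slip in fio.sh: the variable under test expands to empty, leaving test with '[ -eq 1 ]'. The run tolerates it ([ exits nonzero and the branch is simply skipped), but a robust form defaults the value ($flag is a placeholder; the trace does not show the variable's name):

    [ "${flag:-0}" -eq 1 ]    # or: [[ ${flag:-0} -eq 1 ]], which also survives an empty value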
00:17:29.302 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:29.302 15:25:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 627c0771-c9e6-4000-a05c-615608253ef8 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:29.561 { 00:17:29.561 "name": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:29.561 "aliases": [ 00:17:29.561 "lvs/nvme0n1p0" 00:17:29.561 ], 00:17:29.561 "product_name": "Logical Volume", 00:17:29.561 "block_size": 4096, 00:17:29.561 "num_blocks": 26476544, 00:17:29.561 "uuid": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:29.561 "assigned_rate_limits": { 00:17:29.561 "rw_ios_per_sec": 0, 00:17:29.561 "rw_mbytes_per_sec": 0, 00:17:29.561 "r_mbytes_per_sec": 0, 00:17:29.561 "w_mbytes_per_sec": 0 00:17:29.561 }, 00:17:29.561 "claimed": false, 00:17:29.561 "zoned": false, 00:17:29.561 "supported_io_types": { 00:17:29.561 "read": true, 00:17:29.561 "write": true, 00:17:29.561 "unmap": true, 00:17:29.561 "flush": false, 00:17:29.561 "reset": true, 00:17:29.561 "nvme_admin": false, 00:17:29.561 "nvme_io": false, 00:17:29.561 "nvme_io_md": false, 00:17:29.561 "write_zeroes": true, 00:17:29.561 "zcopy": false, 00:17:29.561 "get_zone_info": false, 00:17:29.561 "zone_management": false, 00:17:29.561 "zone_append": false, 00:17:29.561 "compare": false, 00:17:29.561 "compare_and_write": false, 00:17:29.561 "abort": false, 00:17:29.561 "seek_hole": true, 00:17:29.561 "seek_data": true, 00:17:29.561 "copy": false, 00:17:29.561 "nvme_iov_md": false 00:17:29.561 }, 00:17:29.561 "driver_specific": { 00:17:29.561 "lvol": { 00:17:29.561 "lvol_store_uuid": "830740ef-3559-4272-987a-8e9c8b7afdb4", 00:17:29.561 "base_bdev": "nvme0n1", 00:17:29.561 "thin_provision": true, 00:17:29.561 "num_allocated_clusters": 0, 00:17:29.561 "snapshot": false, 00:17:29.561 "clone": false, 00:17:29.561 "esnap_clone": false 00:17:29.561 } 00:17:29.561 } 00:17:29.561 } 00:17:29.561 ]' 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:29.561 15:25:43 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 627c0771-c9e6-4000-a05c-615608253ef8 -c nvc0n1p0 --l2p_dram_limit 60 00:17:29.821 [2024-07-11 15:25:43.314355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.314426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.821 [2024-07-11 15:25:43.314462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.821 [2024-07-11 15:25:43.314477] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.314564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.314586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.821 [2024-07-11 15:25:43.314599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:29.821 [2024-07-11 15:25:43.314612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.314647] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.821 [2024-07-11 15:25:43.315630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.821 [2024-07-11 15:25:43.315664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.315700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.821 [2024-07-11 15:25:43.315713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:17:29.821 [2024-07-11 15:25:43.315726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.315888] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID afb186e6-aa5d-4d02-9017-964808700496 00:17:29.821 [2024-07-11 15:25:43.316863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.316893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:29.821 [2024-07-11 15:25:43.316910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:29.821 [2024-07-11 15:25:43.316922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.321185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.321229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.821 [2024-07-11 15:25:43.321248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:17:29.821 [2024-07-11 15:25:43.321261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.321384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.321402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.821 [2024-07-11 15:25:43.321417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:29.821 [2024-07-11 15:25:43.321428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.321530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.321552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.821 [2024-07-11 15:25:43.321577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:29.821 [2024-07-11 15:25:43.321588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.321627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.821 [2024-07-11 15:25:43.325867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.325904] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.821 [2024-07-11 15:25:43.325964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:17:29.821 [2024-07-11 15:25:43.325978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.326028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.326077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.821 [2024-07-11 15:25:43.326091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:29.821 [2024-07-11 15:25:43.326104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.326172] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:29.821 [2024-07-11 15:25:43.326389] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.821 [2024-07-11 15:25:43.326419] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.821 [2024-07-11 15:25:43.326446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:29.821 [2024-07-11 15:25:43.326480] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.821 [2024-07-11 15:25:43.326495] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.821 [2024-07-11 15:25:43.326508] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:29.821 [2024-07-11 15:25:43.326522] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.821 [2024-07-11 15:25:43.326534] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.821 [2024-07-11 15:25:43.326549] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.821 [2024-07-11 15:25:43.326561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.326574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.821 [2024-07-11 15:25:43.326586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:17:29.821 [2024-07-11 15:25:43.326599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.326723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.821 [2024-07-11 15:25:43.326739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.821 [2024-07-11 15:25:43.326751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:29.821 [2024-07-11 15:25:43.326763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.821 [2024-07-11 15:25:43.326883] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.821 [2024-07-11 15:25:43.326905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.821 [2024-07-11 15:25:43.326917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.821 [2024-07-11 15:25:43.326930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.326941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.821 [2024-07-11 
15:25:43.326951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.326961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:29.821 [2024-07-11 15:25:43.326973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.821 [2024-07-11 15:25:43.326982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:29.821 [2024-07-11 15:25:43.326993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.821 [2024-07-11 15:25:43.327003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.821 [2024-07-11 15:25:43.327014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:29.821 [2024-07-11 15:25:43.327023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.821 [2024-07-11 15:25:43.327036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.821 [2024-07-11 15:25:43.327046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:29.821 [2024-07-11 15:25:43.327057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.821 [2024-07-11 15:25:43.327094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:29.821 [2024-07-11 15:25:43.327107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.821 [2024-07-11 15:25:43.327133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.821 [2024-07-11 15:25:43.327154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.821 [2024-07-11 15:25:43.327165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.821 [2024-07-11 15:25:43.327186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.821 [2024-07-11 15:25:43.327195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.821 [2024-07-11 15:25:43.327216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.821 [2024-07-11 15:25:43.327227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.821 [2024-07-11 15:25:43.327248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.821 [2024-07-11 15:25:43.327257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.821 [2024-07-11 15:25:43.327279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.821 [2024-07-11 15:25:43.327290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:29.821 [2024-07-11 15:25:43.327300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:29.821 [2024-07-11 15:25:43.327310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.821 [2024-07-11 15:25:43.327320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:29.821 [2024-07-11 15:25:43.327333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.821 [2024-07-11 15:25:43.327353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:29.821 [2024-07-11 15:25:43.327363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327373] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.821 [2024-07-11 15:25:43.327384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.821 [2024-07-11 15:25:43.327431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.821 [2024-07-11 15:25:43.327444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.821 [2024-07-11 15:25:43.327457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:29.821 [2024-07-11 15:25:43.327467] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.821 [2024-07-11 15:25:43.327480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.821 [2024-07-11 15:25:43.327490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.822 [2024-07-11 15:25:43.327502] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.822 [2024-07-11 15:25:43.327514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.822 [2024-07-11 15:25:43.327534] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.822 [2024-07-11 15:25:43.327549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:29.822 [2024-07-11 15:25:43.327574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:29.822 [2024-07-11 15:25:43.327586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:29.822 [2024-07-11 15:25:43.327597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:29.822 [2024-07-11 15:25:43.327609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:29.822 [2024-07-11 15:25:43.327620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:29.822 [2024-07-11 15:25:43.327633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:29.822 [2024-07-11 15:25:43.327644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:29.822 [2024-07-11 
15:25:43.327658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:29.822 [2024-07-11 15:25:43.327668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:29.822 [2024-07-11 15:25:43.327730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.822 [2024-07-11 15:25:43.327745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.822 [2024-07-11 15:25:43.327769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.822 [2024-07-11 15:25:43.327781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.822 [2024-07-11 15:25:43.327808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.822 [2024-07-11 15:25:43.327821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.822 [2024-07-11 15:25:43.327831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.822 [2024-07-11 15:25:43.327845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:17:29.822 [2024-07-11 15:25:43.327855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.822 [2024-07-11 15:25:43.327924] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
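The layout dump above also pins down where the 60 MiB L2P budget comes from. The L2P covers 20971520 entries, one per 4 KiB block of user-visible capacity (80 GiB exposed out of the 102400 MiB data region on a 103424 MiB base device), and at the logged 4-byte address size the full mapping table needs 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" line. Since bdev_ftl_create was invoked with --l2p_dram_limit 60, only part of the table may stay resident, which is why a notice a little further down reports an L2P maximum resident size of 59 of 60 MiB. The arithmetic, as a sketch:

    entries=20971520                          # "L2P entries" from the layout dump
    addr=4                                    # "L2P address size: 4" (bytes per entry)
    l2p_mib=$(( entries * addr / 1024 / 1024 ))
    echo "$l2p_mib MiB"                       # -> 80 MiB, vs. --l2p_dram_limit 60

The gap between the 80 GiB user-visible capacity and the roughly 101 GiB base device is presumably FTL overprovisioning plus the metadata regions enumerated above.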
00:17:29.822 [2024-07-11 15:25:43.327940] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:32.349 [2024-07-11 15:25:45.884891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.349 [2024-07-11 15:25:45.885236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:32.349 [2024-07-11 15:25:45.885368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2556.971 ms 00:17:32.349 [2024-07-11 15:25:45.885502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.349 [2024-07-11 15:25:45.915067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.349 [2024-07-11 15:25:45.915360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.349 [2024-07-11 15:25:45.915489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.268 ms 00:17:32.349 [2024-07-11 15:25:45.915595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.349 [2024-07-11 15:25:45.915822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.349 [2024-07-11 15:25:45.915983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:32.349 [2024-07-11 15:25:45.916069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:32.349 [2024-07-11 15:25:45.916172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.349 [2024-07-11 15:25:45.963079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.349 [2024-07-11 15:25:45.963417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.349 [2024-07-11 15:25:45.963561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.707 ms 00:17:32.608 [2024-07-11 15:25:45.963616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:45.963801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:45.963910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.608 [2024-07-11 15:25:45.964072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.608 [2024-07-11 15:25:45.964129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:45.964702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:45.964854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.608 [2024-07-11 15:25:45.965032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:17:32.608 [2024-07-11 15:25:45.965144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:45.965381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:45.965434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.608 [2024-07-11 15:25:45.965578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:32.608 [2024-07-11 15:25:45.965693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:45.986495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:45.986554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.608 [2024-07-11 
15:25:45.986593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.727 ms 00:17:32.608 [2024-07-11 15:25:45.986605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:45.999167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.608 [2024-07-11 15:25:46.012224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:46.012312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.608 [2024-07-11 15:25:46.012335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.454 ms 00:17:32.608 [2024-07-11 15:25:46.012348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:46.062914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:46.062996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:32.608 [2024-07-11 15:25:46.063071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.506 ms 00:17:32.608 [2024-07-11 15:25:46.063091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:46.063397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:46.063443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.608 [2024-07-11 15:25:46.063458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:32.608 [2024-07-11 15:25:46.063475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:46.091804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:46.091880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:32.608 [2024-07-11 15:25:46.091900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.232 ms 00:17:32.608 [2024-07-11 15:25:46.091915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:46.119444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:46.119492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:32.608 [2024-07-11 15:25:46.119527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.477 ms 00:17:32.608 [2024-07-11 15:25:46.119540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:46.120242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.608 [2024-07-11 15:25:46.120286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.608 [2024-07-11 15:25:46.120300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:17:32.608 [2024-07-11 15:25:46.120314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.608 [2024-07-11 15:25:46.199917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.609 [2024-07-11 15:25:46.199999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:32.609 [2024-07-11 15:25:46.200024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.535 ms 00:17:32.609 [2024-07-11 15:25:46.200073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.868 [2024-07-11 
15:25:46.231201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.868 [2024-07-11 15:25:46.231310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:32.868 [2024-07-11 15:25:46.231332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.071 ms 00:17:32.868 [2024-07-11 15:25:46.231346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.868 [2024-07-11 15:25:46.262220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.868 [2024-07-11 15:25:46.262301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:32.868 [2024-07-11 15:25:46.262335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.805 ms 00:17:32.868 [2024-07-11 15:25:46.262348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.868 [2024-07-11 15:25:46.292567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.868 [2024-07-11 15:25:46.292630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.868 [2024-07-11 15:25:46.292648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.166 ms 00:17:32.868 [2024-07-11 15:25:46.292661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.868 [2024-07-11 15:25:46.292732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.868 [2024-07-11 15:25:46.292756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.868 [2024-07-11 15:25:46.292772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:32.868 [2024-07-11 15:25:46.292787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.868 [2024-07-11 15:25:46.292912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.868 [2024-07-11 15:25:46.292935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.868 [2024-07-11 15:25:46.292949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:32.868 [2024-07-11 15:25:46.292962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.868 [2024-07-11 15:25:46.294141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2979.235 ms, result 0 00:17:32.868 { 00:17:32.868 "name": "ftl0", 00:17:32.868 "uuid": "afb186e6-aa5d-4d02-9017-964808700496" 00:17:32.868 } 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:32.868 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:33.126 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:33.384 [ 00:17:33.384 { 00:17:33.384 "name": "ftl0", 00:17:33.384 "aliases": [ 00:17:33.384 "afb186e6-aa5d-4d02-9017-964808700496" 00:17:33.384 ], 00:17:33.384 "product_name": "FTL 
disk", 00:17:33.384 "block_size": 4096, 00:17:33.384 "num_blocks": 20971520, 00:17:33.384 "uuid": "afb186e6-aa5d-4d02-9017-964808700496", 00:17:33.384 "assigned_rate_limits": { 00:17:33.384 "rw_ios_per_sec": 0, 00:17:33.384 "rw_mbytes_per_sec": 0, 00:17:33.384 "r_mbytes_per_sec": 0, 00:17:33.384 "w_mbytes_per_sec": 0 00:17:33.384 }, 00:17:33.384 "claimed": false, 00:17:33.384 "zoned": false, 00:17:33.384 "supported_io_types": { 00:17:33.384 "read": true, 00:17:33.384 "write": true, 00:17:33.384 "unmap": true, 00:17:33.384 "flush": true, 00:17:33.384 "reset": false, 00:17:33.384 "nvme_admin": false, 00:17:33.384 "nvme_io": false, 00:17:33.384 "nvme_io_md": false, 00:17:33.384 "write_zeroes": true, 00:17:33.384 "zcopy": false, 00:17:33.384 "get_zone_info": false, 00:17:33.384 "zone_management": false, 00:17:33.384 "zone_append": false, 00:17:33.384 "compare": false, 00:17:33.384 "compare_and_write": false, 00:17:33.384 "abort": false, 00:17:33.384 "seek_hole": false, 00:17:33.384 "seek_data": false, 00:17:33.384 "copy": false, 00:17:33.384 "nvme_iov_md": false 00:17:33.384 }, 00:17:33.384 "driver_specific": { 00:17:33.384 "ftl": { 00:17:33.384 "base_bdev": "627c0771-c9e6-4000-a05c-615608253ef8", 00:17:33.384 "cache": "nvc0n1p0" 00:17:33.384 } 00:17:33.385 } 00:17:33.385 } 00:17:33.385 ] 00:17:33.385 15:25:46 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:17:33.385 15:25:46 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:33.385 15:25:46 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:33.643 15:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:33.643 15:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:33.643 [2024-07-11 15:25:47.255165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.643 [2024-07-11 15:25:47.255230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:33.643 [2024-07-11 15:25:47.255270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:33.643 [2024-07-11 15:25:47.255289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.643 [2024-07-11 15:25:47.255334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:33.903 [2024-07-11 15:25:47.258941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.903 [2024-07-11 15:25:47.258986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:33.903 [2024-07-11 15:25:47.259040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:17:33.903 [2024-07-11 15:25:47.259058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.903 [2024-07-11 15:25:47.259624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.903 [2024-07-11 15:25:47.259670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:33.903 [2024-07-11 15:25:47.259687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:33.903 [2024-07-11 15:25:47.259702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.903 [2024-07-11 15:25:47.263149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.903 [2024-07-11 15:25:47.263184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:33.903 
[2024-07-11 15:25:47.263199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.408 ms 00:17:33.903 [2024-07-11 15:25:47.263211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.903 [2024-07-11 15:25:47.269178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.903 [2024-07-11 15:25:47.269211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:33.903 [2024-07-11 15:25:47.269241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.939 ms 00:17:33.903 [2024-07-11 15:25:47.269254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.903 [2024-07-11 15:25:47.298168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.903 [2024-07-11 15:25:47.298226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:33.903 [2024-07-11 15:25:47.298247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.817 ms 00:17:33.904 [2024-07-11 15:25:47.298275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.317027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.904 [2024-07-11 15:25:47.317115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:33.904 [2024-07-11 15:25:47.317139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.691 ms 00:17:33.904 [2024-07-11 15:25:47.317153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.317483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.904 [2024-07-11 15:25:47.317509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:33.904 [2024-07-11 15:25:47.317523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:17:33.904 [2024-07-11 15:25:47.317537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.348957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.904 [2024-07-11 15:25:47.349014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:33.904 [2024-07-11 15:25:47.349046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.388 ms 00:17:33.904 [2024-07-11 15:25:47.349077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.377777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.904 [2024-07-11 15:25:47.377846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:33.904 [2024-07-11 15:25:47.377863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.645 ms 00:17:33.904 [2024-07-11 15:25:47.377876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.405067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.904 [2024-07-11 15:25:47.405108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:33.904 [2024-07-11 15:25:47.405124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.141 ms 00:17:33.904 [2024-07-11 15:25:47.405137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.432605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.904 [2024-07-11 15:25:47.432659] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:33.904 [2024-07-11 15:25:47.432675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.347 ms 00:17:33.904 [2024-07-11 15:25:47.432687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.904 [2024-07-11 15:25:47.432734] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:33.904 [2024-07-11 15:25:47.432759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.432995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 
[2024-07-11 15:25:47.433069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:33.904 [2024-07-11 15:25:47.433391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:33.904 [2024-07-11 15:25:47.433738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.433988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:33.905 [2024-07-11 15:25:47.434129] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:33.905 [2024-07-11 15:25:47.434141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: afb186e6-aa5d-4d02-9017-964808700496 00:17:33.905 [2024-07-11 15:25:47.434155] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:33.905 [2024-07-11 15:25:47.434166] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:33.905 [2024-07-11 15:25:47.434183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:33.905 [2024-07-11 15:25:47.434194] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:33.905 [2024-07-11 15:25:47.434221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:33.905 [2024-07-11 15:25:47.434232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:33.905 [2024-07-11 15:25:47.434244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:33.905 [2024-07-11 15:25:47.434253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:33.905 [2024-07-11 15:25:47.434264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:33.905 [2024-07-11 15:25:47.434290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.905 [2024-07-11 15:25:47.434302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:33.905 [2024-07-11 15:25:47.434313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:17:33.905 [2024-07-11 15:25:47.434341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.905 [2024-07-11 15:25:47.449317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.905 [2024-07-11 15:25:47.449354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:33.905 [2024-07-11 15:25:47.449369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.911 ms 00:17:33.905 [2024-07-11 15:25:47.449381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.905 [2024-07-11 15:25:47.449809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.905 [2024-07-11 15:25:47.449835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:33.905 [2024-07-11 15:25:47.449848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:17:33.905 [2024-07-11 15:25:47.449861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.905 [2024-07-11 15:25:47.505876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.905 [2024-07-11 15:25:47.505978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.905 [2024-07-11 15:25:47.505999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.905 [2024-07-11 15:25:47.506012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
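Two details in the shutdown trace above are worth unpacking. First, the statistics dump: WAF (write amplification factor) is, in effect, total media writes over user writes, and since the device is unloaded before any fio I/O has run, that is 960 / 0, printed as "WAF: inf"; the 960 writes are evidently pure metadata (superblock, band and chunk info) from startup and the clean shutdown. Second, the unload is deliberate: the script captured the bdev configuration as a '{"subsystems": [...]}' JSON blob earlier, so after bdev_ftl_unload and killprocess the same stack can be rebuilt inside the fio process itself. The run that follows uses the standard SPDK bdev fio plugin invocation, mirrored from the LD_PRELOAD line traced below (libasan is preloaded first because the harness finds it in the plugin's shared-library dependencies, as the grep libasan steps show):

    # Out-of-process replay of the saved config via the SPDK bdev fio plugin;
    # the jobfile sets ioengine=spdk_bdev, per the fio banner that follows.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio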
00:17:33.905 [2024-07-11 15:25:47.506108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.905 [2024-07-11 15:25:47.506128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.905 [2024-07-11 15:25:47.506141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.905 [2024-07-11 15:25:47.506154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.905 [2024-07-11 15:25:47.506306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.905 [2024-07-11 15:25:47.506333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.905 [2024-07-11 15:25:47.506345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.905 [2024-07-11 15:25:47.506357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.905 [2024-07-11 15:25:47.506400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.905 [2024-07-11 15:25:47.506419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.905 [2024-07-11 15:25:47.506431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.905 [2024-07-11 15:25:47.506443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.606747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.606842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.165 [2024-07-11 15:25:47.606859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 15:25:47.606872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.683206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.683289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.165 [2024-07-11 15:25:47.683307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 15:25:47.683320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.683431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.683453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.165 [2024-07-11 15:25:47.683467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 15:25:47.683479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.683553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.683575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.165 [2024-07-11 15:25:47.683587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 15:25:47.683598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.683726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.683750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.165 [2024-07-11 15:25:47.683764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 
15:25:47.683777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.683842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.683863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:34.165 [2024-07-11 15:25:47.683875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 15:25:47.683887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.683939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.683957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.165 [2024-07-11 15:25:47.683969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.165 [2024-07-11 15:25:47.683984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.165 [2024-07-11 15:25:47.684080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.165 [2024-07-11 15:25:47.684105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.165 [2024-07-11 15:25:47.684118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.166 [2024-07-11 15:25:47.684130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.166 [2024-07-11 15:25:47.684304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 429.123 ms, result 0 00:17:34.166 true 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78758 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 78758 ']' 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 78758 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78758 00:17:34.166 killing process with pid 78758 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78758' 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 78758 00:17:34.166 15:25:47 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 78758 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:39.440 15:25:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:39.440 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:39.440 fio-3.35 00:17:39.440 Starting 1 thread 00:17:44.706 00:17:44.706 test: (groupid=0, jobs=1): err= 0: pid=78962: Thu Jul 11 15:25:57 2024 00:17:44.706 read: IOPS=922, BW=61.3MiB/s (64.3MB/s)(255MiB/4153msec) 00:17:44.706 slat (nsec): min=5222, max=89178, avg=7507.95, stdev=3874.30 00:17:44.706 clat (usec): min=334, max=713, avg=484.66, stdev=48.82 00:17:44.706 lat (usec): min=340, max=726, avg=492.17, stdev=49.73 00:17:44.706 clat percentiles (usec): 00:17:44.706 | 1.00th=[ 379], 5.00th=[ 420], 10.00th=[ 437], 20.00th=[ 449], 00:17:44.706 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:17:44.706 | 70.00th=[ 498], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 578], 00:17:44.706 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 676], 99.95th=[ 693], 00:17:44.706 | 99.99th=[ 717] 00:17:44.706 write: IOPS=929, BW=61.7MiB/s (64.7MB/s)(256MiB/4148msec); 0 zone resets 00:17:44.706 slat (nsec): min=19100, max=86506, avg=24346.96, stdev=6297.44 00:17:44.706 clat (usec): min=395, max=1028, avg=549.27, stdev=61.22 00:17:44.706 lat (usec): min=418, max=1054, avg=573.61, stdev=61.66 00:17:44.706 clat percentiles (usec): 00:17:44.706 | 1.00th=[ 437], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 502], 00:17:44.706 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:17:44.706 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 644], 00:17:44.706 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 938], 99.95th=[ 979], 00:17:44.706 | 99.99th=[ 1029] 00:17:44.706 bw ( KiB/s): min=61608, max=63920, per=100.00%, avg=63223.00, stdev=721.25, samples=8 00:17:44.706 iops : min= 906, max= 940, avg=929.75, stdev=10.61, samples=8 00:17:44.706 lat (usec) : 500=44.30%, 750=54.94%, 1000=0.75% 00:17:44.706 lat 
(msec) : 2=0.01% 00:17:44.706 cpu : usr=99.18%, sys=0.14%, ctx=6, majf=0, minf=1171 00:17:44.706 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:44.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.706 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.706 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:44.706 00:17:44.706 Run status group 0 (all jobs): 00:17:44.706 READ: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=255MiB (267MB), run=4153-4153msec 00:17:44.706 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=256MiB (269MB), run=4148-4148msec 00:17:45.642 ----------------------------------------------------- 00:17:45.642 Suppressions used: 00:17:45.642 count bytes template 00:17:45.642 1 5 /usr/src/fio/parse.c 00:17:45.642 1 8 libtcmalloc_minimal.so 00:17:45.642 1 904 libcrypto.so 00:17:45.642 ----------------------------------------------------- 00:17:45.642 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:45.642 15:25:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:45.642 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:45.642 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:45.642 fio-3.35 00:17:45.642 Starting 2 threads 00:18:17.721 00:18:17.721 first_half: (groupid=0, jobs=1): err= 0: pid=79059: Thu Jul 11 15:26:28 2024 00:18:17.721 read: IOPS=2345, BW=9381KiB/s (9606kB/s)(256MiB/27919msec) 00:18:17.721 slat (nsec): min=4378, max=51010, avg=7897.00, stdev=2893.80 00:18:17.721 clat (usec): min=794, max=296031, avg=46555.63, stdev=26722.19 00:18:17.721 lat (usec): min=798, max=296039, avg=46563.53, stdev=26722.42 00:18:17.721 clat percentiles (msec): 00:18:17.721 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 40], 00:18:17.721 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:18:17.721 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 49], 95.00th=[ 87], 00:18:17.721 | 99.00th=[ 190], 99.50th=[ 205], 99.90th=[ 236], 99.95th=[ 257], 00:18:17.721 | 99.99th=[ 288] 00:18:17.721 write: IOPS=2351, BW=9407KiB/s (9632kB/s)(256MiB/27868msec); 0 zone resets 00:18:17.721 slat (usec): min=5, max=125, avg= 8.80, stdev= 5.15 00:18:17.721 clat (usec): min=465, max=52137, avg=7976.17, stdev=7914.39 00:18:17.721 lat (usec): min=484, max=52145, avg=7984.97, stdev=7914.55 00:18:17.721 clat percentiles (usec): 00:18:17.721 | 1.00th=[ 1139], 5.00th=[ 1532], 10.00th=[ 1860], 20.00th=[ 3359], 00:18:17.721 | 30.00th=[ 4228], 40.00th=[ 5473], 50.00th=[ 6128], 60.00th=[ 6980], 00:18:17.721 | 70.00th=[ 7570], 80.00th=[ 9110], 90.00th=[14877], 95.00th=[22676], 00:18:17.721 | 99.00th=[43254], 99.50th=[44827], 99.90th=[49546], 99.95th=[50594], 00:18:17.721 | 99.99th=[51643] 00:18:17.721 bw ( KiB/s): min= 416, max=47728, per=100.00%, avg=22641.13, stdev=13933.69, samples=23 00:18:17.721 iops : min= 104, max=11932, avg=5660.26, stdev=3483.41, samples=23 00:18:17.721 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.22% 00:18:17.721 lat (msec) : 2=5.40%, 4=8.11%, 10=27.46%, 20=7.51%, 50=46.98% 00:18:17.721 lat (msec) : 100=2.03%, 250=2.18%, 500=0.03% 00:18:17.721 cpu : usr=98.89%, sys=0.33%, ctx=38, majf=0, minf=5552 00:18:17.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:17.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.722 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.722 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.722 second_half: (groupid=0, jobs=1): err= 0: pid=79060: Thu Jul 11 15:26:28 2024 00:18:17.722 read: IOPS=2368, BW=9472KiB/s (9699kB/s)(256MiB/27656msec) 00:18:17.722 slat (nsec): min=4311, max=47501, avg=7819.34, stdev=2702.15 00:18:17.722 clat (msec): min=12, max=247, avg=46.94, stdev=23.65 00:18:17.722 lat (msec): min=12, max=247, avg=46.95, stdev=23.65 00:18:17.722 clat percentiles (msec): 00:18:17.722 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 40], 00:18:17.722 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:18:17.722 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 50], 95.00th=[ 82], 00:18:17.722 | 99.00th=[ 178], 
99.50th=[ 188], 99.90th=[ 211], 99.95th=[ 218], 00:18:17.722 | 99.99th=[ 241] 00:18:17.722 write: IOPS=2383, BW=9534KiB/s (9763kB/s)(256MiB/27496msec); 0 zone resets 00:18:17.722 slat (usec): min=5, max=615, avg= 8.73, stdev= 6.27 00:18:17.722 clat (usec): min=457, max=42884, avg=7080.05, stdev=4369.09 00:18:17.722 lat (usec): min=471, max=42891, avg=7088.78, stdev=4369.54 00:18:17.722 clat percentiles (usec): 00:18:17.722 | 1.00th=[ 1287], 5.00th=[ 2114], 10.00th=[ 3032], 20.00th=[ 4047], 00:18:17.722 | 30.00th=[ 5014], 40.00th=[ 5604], 50.00th=[ 6194], 60.00th=[ 6915], 00:18:17.722 | 70.00th=[ 7308], 80.00th=[ 8586], 90.00th=[13304], 95.00th=[15401], 00:18:17.722 | 99.00th=[22938], 99.50th=[31327], 99.90th=[38536], 99.95th=[41157], 00:18:17.722 | 99.99th=[42206] 00:18:17.722 bw ( KiB/s): min= 2048, max=46104, per=100.00%, avg=21845.33, stdev=13596.42, samples=24 00:18:17.722 iops : min= 512, max=11526, avg=5461.33, stdev=3399.10, samples=24 00:18:17.722 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.16% 00:18:17.722 lat (msec) : 2=2.01%, 4=7.47%, 10=32.04%, 20=7.68%, 50=45.98% 00:18:17.722 lat (msec) : 100=2.62%, 250=2.00% 00:18:17.722 cpu : usr=99.02%, sys=0.31%, ctx=51, majf=0, minf=5565 00:18:17.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:17.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.722 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.722 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.722 00:18:17.722 Run status group 0 (all jobs): 00:18:17.722 READ: bw=18.3MiB/s (19.2MB/s), 9381KiB/s-9472KiB/s (9606kB/s-9699kB/s), io=512MiB (536MB), run=27656-27919msec 00:18:17.722 WRITE: bw=18.4MiB/s (19.3MB/s), 9407KiB/s-9534KiB/s (9632kB/s-9763kB/s), io=512MiB (537MB), run=27496-27868msec 00:18:17.722 ----------------------------------------------------- 00:18:17.722 Suppressions used: 00:18:17.722 count bytes template 00:18:17.722 2 10 /usr/src/fio/parse.c 00:18:17.722 2 192 /usr/src/fio/iolog.c 00:18:17.722 1 8 libtcmalloc_minimal.so 00:18:17.722 1 904 libcrypto.so 00:18:17.722 ----------------------------------------------------- 00:18:17.722 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:17.722 15:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:17.722 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:17.722 fio-3.35 00:18:17.722 Starting 1 thread 00:18:35.807 00:18:35.807 test: (groupid=0, jobs=1): err= 0: pid=79408: Thu Jul 11 15:26:47 2024 00:18:35.807 read: IOPS=6204, BW=24.2MiB/s (25.4MB/s)(255MiB/10509msec) 00:18:35.807 slat (nsec): min=4383, max=51632, avg=7051.23, stdev=2687.97 00:18:35.807 clat (usec): min=924, max=41145, avg=20619.20, stdev=1337.13 00:18:35.807 lat (usec): min=929, max=41153, avg=20626.25, stdev=1337.19 00:18:35.807 clat percentiles (usec): 00:18:35.807 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19792], 00:18:35.807 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 00:18:35.807 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21365], 95.00th=[22938], 00:18:35.807 | 99.00th=[26346], 99.50th=[26608], 99.90th=[30802], 99.95th=[35914], 00:18:35.807 | 99.99th=[40109] 00:18:35.807 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(256MiB/5541msec); 0 zone resets 00:18:35.808 slat (usec): min=5, max=411, avg=10.03, stdev= 6.15 00:18:35.808 clat (usec): min=654, max=71419, avg=10763.54, stdev=13684.87 00:18:35.808 lat (usec): min=663, max=71430, avg=10773.57, stdev=13684.92 00:18:35.808 clat percentiles (usec): 00:18:35.808 | 1.00th=[ 955], 5.00th=[ 1156], 10.00th=[ 1287], 20.00th=[ 1467], 00:18:35.808 | 30.00th=[ 1680], 40.00th=[ 2147], 50.00th=[ 7111], 60.00th=[ 7963], 00:18:35.808 | 70.00th=[ 9110], 80.00th=[10683], 90.00th=[40109], 95.00th=[42206], 00:18:35.808 | 99.00th=[46924], 99.50th=[47973], 99.90th=[54264], 99.95th=[58459], 00:18:35.808 | 99.99th=[65799] 00:18:35.808 bw ( KiB/s): min= 1632, max=66240, per=92.33%, avg=43682.08, stdev=16121.31, samples=12 00:18:35.808 iops : min= 408, max=16560, avg=10920.50, stdev=4030.32, samples=12 00:18:35.808 lat (usec) : 750=0.02%, 1000=0.77% 00:18:35.808 lat (msec) : 2=18.60%, 4=1.57%, 10=17.34%, 20=16.25%, 50=45.35% 00:18:35.808 lat (msec) : 100=0.10% 00:18:35.808 cpu : usr=98.51%, sys=0.67%, ctx=29, majf=0, minf=5568 00:18:35.808 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:35.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.808 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.808 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.808 00:18:35.808 Run status group 0 (all jobs): 00:18:35.808 READ: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=255MiB (267MB), run=10509-10509msec 00:18:35.808 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=256MiB (268MB), run=5541-5541msec 00:18:35.808 ----------------------------------------------------- 00:18:35.808 Suppressions used: 00:18:35.808 count bytes template 00:18:35.808 1 5 /usr/src/fio/parse.c 00:18:35.808 2 192 /usr/src/fio/iolog.c 00:18:35.808 1 8 libtcmalloc_minimal.so 00:18:35.808 1 904 libcrypto.so 00:18:35.808 ----------------------------------------------------- 00:18:35.808 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:35.808 Remove shared memory files 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62003 /dev/shm/spdk_tgt_trace.pid77704 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:35.808 15:26:49 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:36.067 00:18:36.067 real 1m10.451s 00:18:36.067 user 2m36.516s 00:18:36.067 sys 0m3.347s 00:18:36.067 15:26:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:36.067 15:26:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:36.067 ************************************ 00:18:36.067 END TEST ftl_fio_basic 00:18:36.067 ************************************ 00:18:36.067 15:26:49 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:36.067 15:26:49 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:36.067 15:26:49 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:36.067 15:26:49 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:36.067 15:26:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:36.067 ************************************ 00:18:36.067 START TEST ftl_bdevperf 00:18:36.067 ************************************ 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:36.067 * Looking for test storage... 
00:18:36.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:36.067 15:26:49 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79662 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79662 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 79662 ']' 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.067 15:26:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:36.067 [2024-07-11 15:26:49.670877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:36.067 [2024-07-11 15:26:49.671077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79662 ] 00:18:36.335 [2024-07-11 15:26:49.853564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.609 [2024-07-11 15:26:50.081104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:37.176 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:37.434 15:26:50 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:37.434 15:26:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:37.692 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:37.692 { 00:18:37.692 "name": "nvme0n1", 00:18:37.692 "aliases": [ 00:18:37.692 "b52ce0ec-6595-430c-983a-818628370ae1" 00:18:37.692 ], 00:18:37.692 "product_name": "NVMe disk", 00:18:37.692 "block_size": 4096, 00:18:37.692 "num_blocks": 1310720, 00:18:37.692 "uuid": "b52ce0ec-6595-430c-983a-818628370ae1", 00:18:37.692 "assigned_rate_limits": { 00:18:37.692 "rw_ios_per_sec": 0, 00:18:37.692 "rw_mbytes_per_sec": 0, 00:18:37.692 "r_mbytes_per_sec": 0, 00:18:37.692 "w_mbytes_per_sec": 0 00:18:37.692 }, 00:18:37.692 "claimed": true, 00:18:37.692 "claim_type": "read_many_write_one", 00:18:37.692 "zoned": false, 00:18:37.692 "supported_io_types": { 00:18:37.692 "read": true, 00:18:37.692 "write": true, 00:18:37.692 "unmap": true, 00:18:37.692 "flush": true, 00:18:37.692 "reset": true, 00:18:37.692 "nvme_admin": true, 00:18:37.692 "nvme_io": true, 00:18:37.692 "nvme_io_md": false, 00:18:37.692 "write_zeroes": true, 00:18:37.692 "zcopy": false, 00:18:37.692 "get_zone_info": false, 00:18:37.692 "zone_management": false, 00:18:37.692 "zone_append": false, 00:18:37.692 "compare": true, 00:18:37.692 "compare_and_write": false, 00:18:37.692 "abort": true, 00:18:37.692 "seek_hole": false, 00:18:37.692 "seek_data": false, 00:18:37.692 "copy": true, 00:18:37.692 "nvme_iov_md": false 00:18:37.692 }, 00:18:37.692 "driver_specific": { 00:18:37.692 "nvme": [ 00:18:37.692 { 00:18:37.692 "pci_address": "0000:00:11.0", 00:18:37.692 "trid": { 00:18:37.693 "trtype": "PCIe", 00:18:37.693 "traddr": "0000:00:11.0" 00:18:37.693 }, 00:18:37.693 "ctrlr_data": { 00:18:37.693 "cntlid": 0, 00:18:37.693 "vendor_id": "0x1b36", 00:18:37.693 "model_number": "QEMU NVMe Ctrl", 00:18:37.693 "serial_number": "12341", 00:18:37.693 "firmware_revision": "8.0.0", 00:18:37.693 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:37.693 "oacs": { 00:18:37.693 "security": 0, 00:18:37.693 "format": 1, 00:18:37.693 "firmware": 0, 00:18:37.693 "ns_manage": 1 00:18:37.693 }, 00:18:37.693 "multi_ctrlr": false, 00:18:37.693 "ana_reporting": false 00:18:37.693 }, 00:18:37.693 "vs": { 00:18:37.693 "nvme_version": "1.4" 00:18:37.693 }, 00:18:37.693 "ns_data": { 00:18:37.693 "id": 1, 00:18:37.693 "can_share": false 00:18:37.693 } 00:18:37.693 } 00:18:37.693 ], 00:18:37.693 "mp_policy": "active_passive" 00:18:37.693 } 00:18:37.693 } 00:18:37.693 ]' 00:18:37.693 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:37.693 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:37.693 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:37.950 15:26:51 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:37.950 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.208 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=830740ef-3559-4272-987a-8e9c8b7afdb4 00:18:38.208 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.208 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 830740ef-3559-4272-987a-8e9c8b7afdb4 00:18:38.208 15:26:51 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:38.466 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=281b1b6a-e4d9-41cc-9626-9da871af9353 00:18:38.466 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 281b1b6a-e4d9-41cc-9626-9da871af9353 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=61095817-a7f2-4c34-acec-436478eecae3 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 61095817-a7f2-4c34-acec-436478eecae3 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=61095817-a7f2-4c34-acec-436478eecae3 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 61095817-a7f2-4c34-acec-436478eecae3 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=61095817-a7f2-4c34-acec-436478eecae3 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:38.724 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61095817-a7f2-4c34-acec-436478eecae3 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:39.292 { 00:18:39.292 "name": "61095817-a7f2-4c34-acec-436478eecae3", 00:18:39.292 "aliases": [ 00:18:39.292 "lvs/nvme0n1p0" 00:18:39.292 ], 00:18:39.292 "product_name": "Logical Volume", 00:18:39.292 "block_size": 4096, 00:18:39.292 "num_blocks": 26476544, 00:18:39.292 "uuid": "61095817-a7f2-4c34-acec-436478eecae3", 00:18:39.292 "assigned_rate_limits": { 00:18:39.292 "rw_ios_per_sec": 0, 00:18:39.292 "rw_mbytes_per_sec": 0, 00:18:39.292 "r_mbytes_per_sec": 0, 00:18:39.292 "w_mbytes_per_sec": 0 00:18:39.292 }, 00:18:39.292 "claimed": false, 00:18:39.292 "zoned": false, 00:18:39.292 "supported_io_types": { 00:18:39.292 "read": true, 00:18:39.292 "write": true, 00:18:39.292 "unmap": true, 00:18:39.292 "flush": false, 00:18:39.292 "reset": true, 00:18:39.292 "nvme_admin": false, 00:18:39.292 "nvme_io": false, 00:18:39.292 "nvme_io_md": false, 00:18:39.292 "write_zeroes": true, 00:18:39.292 "zcopy": false, 00:18:39.292 "get_zone_info": false, 00:18:39.292 "zone_management": false, 00:18:39.292 "zone_append": false, 00:18:39.292 "compare": false, 00:18:39.292 "compare_and_write": false, 00:18:39.292 "abort": false, 00:18:39.292 "seek_hole": true, 
00:18:39.292 "seek_data": true, 00:18:39.292 "copy": false, 00:18:39.292 "nvme_iov_md": false 00:18:39.292 }, 00:18:39.292 "driver_specific": { 00:18:39.292 "lvol": { 00:18:39.292 "lvol_store_uuid": "281b1b6a-e4d9-41cc-9626-9da871af9353", 00:18:39.292 "base_bdev": "nvme0n1", 00:18:39.292 "thin_provision": true, 00:18:39.292 "num_allocated_clusters": 0, 00:18:39.292 "snapshot": false, 00:18:39.292 "clone": false, 00:18:39.292 "esnap_clone": false 00:18:39.292 } 00:18:39.292 } 00:18:39.292 } 00:18:39.292 ]' 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:39.292 15:26:52 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 61095817-a7f2-4c34-acec-436478eecae3 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=61095817-a7f2-4c34-acec-436478eecae3 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:39.550 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61095817-a7f2-4c34-acec-436478eecae3 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:39.807 { 00:18:39.807 "name": "61095817-a7f2-4c34-acec-436478eecae3", 00:18:39.807 "aliases": [ 00:18:39.807 "lvs/nvme0n1p0" 00:18:39.807 ], 00:18:39.807 "product_name": "Logical Volume", 00:18:39.807 "block_size": 4096, 00:18:39.807 "num_blocks": 26476544, 00:18:39.807 "uuid": "61095817-a7f2-4c34-acec-436478eecae3", 00:18:39.807 "assigned_rate_limits": { 00:18:39.807 "rw_ios_per_sec": 0, 00:18:39.807 "rw_mbytes_per_sec": 0, 00:18:39.807 "r_mbytes_per_sec": 0, 00:18:39.807 "w_mbytes_per_sec": 0 00:18:39.807 }, 00:18:39.807 "claimed": false, 00:18:39.807 "zoned": false, 00:18:39.807 "supported_io_types": { 00:18:39.807 "read": true, 00:18:39.807 "write": true, 00:18:39.807 "unmap": true, 00:18:39.807 "flush": false, 00:18:39.807 "reset": true, 00:18:39.807 "nvme_admin": false, 00:18:39.807 "nvme_io": false, 00:18:39.807 "nvme_io_md": false, 00:18:39.807 "write_zeroes": true, 00:18:39.807 "zcopy": false, 00:18:39.807 "get_zone_info": false, 00:18:39.807 "zone_management": false, 00:18:39.807 "zone_append": false, 00:18:39.807 "compare": false, 00:18:39.807 "compare_and_write": false, 00:18:39.807 "abort": false, 00:18:39.807 "seek_hole": true, 00:18:39.807 "seek_data": true, 00:18:39.807 
"copy": false, 00:18:39.807 "nvme_iov_md": false 00:18:39.807 }, 00:18:39.807 "driver_specific": { 00:18:39.807 "lvol": { 00:18:39.807 "lvol_store_uuid": "281b1b6a-e4d9-41cc-9626-9da871af9353", 00:18:39.807 "base_bdev": "nvme0n1", 00:18:39.807 "thin_provision": true, 00:18:39.807 "num_allocated_clusters": 0, 00:18:39.807 "snapshot": false, 00:18:39.807 "clone": false, 00:18:39.807 "esnap_clone": false 00:18:39.807 } 00:18:39.807 } 00:18:39.807 } 00:18:39.807 ]' 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:39.807 15:26:53 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 61095817-a7f2-4c34-acec-436478eecae3 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=61095817-a7f2-4c34-acec-436478eecae3 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:40.065 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61095817-a7f2-4c34-acec-436478eecae3 00:18:40.323 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:40.323 { 00:18:40.323 "name": "61095817-a7f2-4c34-acec-436478eecae3", 00:18:40.323 "aliases": [ 00:18:40.323 "lvs/nvme0n1p0" 00:18:40.323 ], 00:18:40.323 "product_name": "Logical Volume", 00:18:40.323 "block_size": 4096, 00:18:40.323 "num_blocks": 26476544, 00:18:40.323 "uuid": "61095817-a7f2-4c34-acec-436478eecae3", 00:18:40.323 "assigned_rate_limits": { 00:18:40.323 "rw_ios_per_sec": 0, 00:18:40.323 "rw_mbytes_per_sec": 0, 00:18:40.323 "r_mbytes_per_sec": 0, 00:18:40.323 "w_mbytes_per_sec": 0 00:18:40.323 }, 00:18:40.323 "claimed": false, 00:18:40.323 "zoned": false, 00:18:40.323 "supported_io_types": { 00:18:40.323 "read": true, 00:18:40.323 "write": true, 00:18:40.323 "unmap": true, 00:18:40.323 "flush": false, 00:18:40.323 "reset": true, 00:18:40.324 "nvme_admin": false, 00:18:40.324 "nvme_io": false, 00:18:40.324 "nvme_io_md": false, 00:18:40.324 "write_zeroes": true, 00:18:40.324 "zcopy": false, 00:18:40.324 "get_zone_info": false, 00:18:40.324 "zone_management": false, 00:18:40.324 "zone_append": false, 00:18:40.324 "compare": false, 00:18:40.324 "compare_and_write": false, 00:18:40.324 "abort": false, 00:18:40.324 "seek_hole": true, 00:18:40.324 "seek_data": true, 00:18:40.324 "copy": false, 00:18:40.324 "nvme_iov_md": false 00:18:40.324 }, 00:18:40.324 "driver_specific": { 00:18:40.324 "lvol": { 00:18:40.324 "lvol_store_uuid": "281b1b6a-e4d9-41cc-9626-9da871af9353", 00:18:40.324 "base_bdev": 
"nvme0n1", 00:18:40.324 "thin_provision": true, 00:18:40.324 "num_allocated_clusters": 0, 00:18:40.324 "snapshot": false, 00:18:40.324 "clone": false, 00:18:40.324 "esnap_clone": false 00:18:40.324 } 00:18:40.324 } 00:18:40.324 } 00:18:40.324 ]' 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:40.324 15:26:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 61095817-a7f2-4c34-acec-436478eecae3 -c nvc0n1p0 --l2p_dram_limit 20 00:18:40.583 [2024-07-11 15:26:54.099076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.099154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.583 [2024-07-11 15:26:54.099194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.583 [2024-07-11 15:26:54.099205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.099279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.099296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.583 [2024-07-11 15:26:54.099310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:40.583 [2024-07-11 15:26:54.099322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.099349] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.583 [2024-07-11 15:26:54.100301] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.583 [2024-07-11 15:26:54.100372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.100388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.583 [2024-07-11 15:26:54.100402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:18:40.583 [2024-07-11 15:26:54.100413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.100527] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 81426732-ca59-4209-8306-6b42195758ba 00:18:40.583 [2024-07-11 15:26:54.101652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.101705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:40.583 [2024-07-11 15:26:54.101720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:40.583 [2024-07-11 15:26:54.101736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.106558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.106633] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.583 [2024-07-11 15:26:54.106649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:18:40.583 [2024-07-11 15:26:54.106662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.106762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.106785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.583 [2024-07-11 15:26:54.106802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:40.583 [2024-07-11 15:26:54.106834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.106910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.106935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.583 [2024-07-11 15:26:54.106948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:40.583 [2024-07-11 15:26:54.106961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.106989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.583 [2024-07-11 15:26:54.111159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.111212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.583 [2024-07-11 15:26:54.111232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.175 ms 00:18:40.583 [2024-07-11 15:26:54.111243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.111285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.111302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.583 [2024-07-11 15:26:54.111322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:40.583 [2024-07-11 15:26:54.111333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.111388] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:40.583 [2024-07-11 15:26:54.111552] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.583 [2024-07-11 15:26:54.111578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.583 [2024-07-11 15:26:54.111592] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:40.583 [2024-07-11 15:26:54.111608] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.583 [2024-07-11 15:26:54.111622] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.583 [2024-07-11 15:26:54.111635] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.583 [2024-07-11 15:26:54.111646] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.583 [2024-07-11 15:26:54.111660] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.583 [2024-07-11 15:26:54.111670] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:18:40.583 [2024-07-11 15:26:54.111683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.111694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.583 [2024-07-11 15:26:54.111707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:18:40.583 [2024-07-11 15:26:54.111720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.111805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.583 [2024-07-11 15:26:54.111820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.583 [2024-07-11 15:26:54.111833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:40.583 [2024-07-11 15:26:54.111844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.583 [2024-07-11 15:26:54.111949] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.583 [2024-07-11 15:26:54.111966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.583 [2024-07-11 15:26:54.111980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.583 [2024-07-11 15:26:54.111991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.583 [2024-07-11 15:26:54.112007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.583 [2024-07-11 15:26:54.112017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.583 [2024-07-11 15:26:54.112029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.583 [2024-07-11 15:26:54.112057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.583 [2024-07-11 15:26:54.112072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.583 [2024-07-11 15:26:54.112083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.583 [2024-07-11 15:26:54.112095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.583 [2024-07-11 15:26:54.112107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.583 [2024-07-11 15:26:54.112119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.583 [2024-07-11 15:26:54.112129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.583 [2024-07-11 15:26:54.112143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:40.583 [2024-07-11 15:26:54.112154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.583 [2024-07-11 15:26:54.112168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.583 [2024-07-11 15:26:54.112178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:40.583 [2024-07-11 15:26:54.112202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.583 [2024-07-11 15:26:54.112213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.584 [2024-07-11 15:26:54.112225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.584 [2024-07-11 15:26:54.112247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.584 [2024-07-11 15:26:54.112256] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.584 [2024-07-11 15:26:54.112278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.584 [2024-07-11 15:26:54.112290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.584 [2024-07-11 15:26:54.112312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.584 [2024-07-11 15:26:54.112322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.584 [2024-07-11 15:26:54.112344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.584 [2024-07-11 15:26:54.112358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.584 [2024-07-11 15:26:54.112379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.584 [2024-07-11 15:26:54.112389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:40.584 [2024-07-11 15:26:54.112401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.584 [2024-07-11 15:26:54.112411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.584 [2024-07-11 15:26:54.112440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:40.584 [2024-07-11 15:26:54.112451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.584 [2024-07-11 15:26:54.112473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:40.584 [2024-07-11 15:26:54.112485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112495] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.584 [2024-07-11 15:26:54.112509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.584 [2024-07-11 15:26:54.112520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.584 [2024-07-11 15:26:54.112532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.584 [2024-07-11 15:26:54.112544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.584 [2024-07-11 15:26:54.112558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.584 [2024-07-11 15:26:54.112568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.584 [2024-07-11 15:26:54.112581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.584 [2024-07-11 15:26:54.112591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.584 [2024-07-11 15:26:54.112603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.584 [2024-07-11 15:26:54.112618] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.584 [2024-07-11 15:26:54.112633] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.584 [2024-07-11 15:26:54.112659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:40.584 [2024-07-11 15:26:54.112670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:40.584 [2024-07-11 15:26:54.112684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:40.584 [2024-07-11 15:26:54.112695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:40.584 [2024-07-11 15:26:54.112707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:40.584 [2024-07-11 15:26:54.112719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:40.584 [2024-07-11 15:26:54.112731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:40.584 [2024-07-11 15:26:54.112742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:40.584 [2024-07-11 15:26:54.112758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:40.584 [2024-07-11 15:26:54.112832] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.584 [2024-07-11 15:26:54.112846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.584 [2024-07-11 15:26:54.112870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.584 [2024-07-11 15:26:54.112881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.584 [2024-07-11 15:26:54.112894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.584 [2024-07-11 15:26:54.112906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.584 [2024-07-11 15:26:54.112919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.584 [2024-07-11 15:26:54.112933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:18:40.584 [2024-07-11 15:26:54.112946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.584 [2024-07-11 15:26:54.112988] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:40.584 [2024-07-11 15:26:54.113010] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:43.117 [2024-07-11 15:26:56.105202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.105293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:43.117 [2024-07-11 15:26:56.105312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1992.224 ms 00:18:43.117 [2024-07-11 15:26:56.105329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.144189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.144269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.117 [2024-07-11 15:26:56.144293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.491 ms 00:18:43.117 [2024-07-11 15:26:56.144306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.144477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.144500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.117 [2024-07-11 15:26:56.144513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:43.117 [2024-07-11 15:26:56.144527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.177132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.177197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.117 [2024-07-11 15:26:56.177215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.557 ms 00:18:43.117 [2024-07-11 15:26:56.177228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.177281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.177303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.117 [2024-07-11 15:26:56.177315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:43.117 [2024-07-11 15:26:56.177327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.177733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.177759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.117 [2024-07-11 15:26:56.177773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:18:43.117 [2024-07-11 15:26:56.177801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.177934] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.177954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.117 [2024-07-11 15:26:56.177966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:43.117 [2024-07-11 15:26:56.178012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.194921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.194977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.117 [2024-07-11 15:26:56.194994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.864 ms 00:18:43.117 [2024-07-11 15:26:56.195006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.207539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:43.117 [2024-07-11 15:26:56.212234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.212267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.117 [2024-07-11 15:26:56.212285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.117 ms 00:18:43.117 [2024-07-11 15:26:56.212296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.266509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.266602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:43.117 [2024-07-11 15:26:56.266642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.176 ms 00:18:43.117 [2024-07-11 15:26:56.266654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.266905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.266933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.117 [2024-07-11 15:26:56.266952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:18:43.117 [2024-07-11 15:26:56.266963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.294864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.294919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:43.117 [2024-07-11 15:26:56.294955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.822 ms 00:18:43.117 [2024-07-11 15:26:56.294982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.321345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.321384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:43.117 [2024-07-11 15:26:56.321420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.308 ms 00:18:43.117 [2024-07-11 15:26:56.321431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.322110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.322135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.117 [2024-07-11 15:26:56.322152] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:18:43.117 [2024-07-11 15:26:56.322164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.401232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.401301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:43.117 [2024-07-11 15:26:56.401344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.006 ms 00:18:43.117 [2024-07-11 15:26:56.401356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.429352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.429401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:43.117 [2024-07-11 15:26:56.429440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.946 ms 00:18:43.117 [2024-07-11 15:26:56.429451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.456610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.456670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:43.117 [2024-07-11 15:26:56.456709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.107 ms 00:18:43.117 [2024-07-11 15:26:56.456719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.482582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.482654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.117 [2024-07-11 15:26:56.482693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.800 ms 00:18:43.117 [2024-07-11 15:26:56.482704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.482761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.482778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.117 [2024-07-11 15:26:56.482797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:43.117 [2024-07-11 15:26:56.482808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.117 [2024-07-11 15:26:56.482898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.117 [2024-07-11 15:26:56.482915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.118 [2024-07-11 15:26:56.482929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:43.118 [2024-07-11 15:26:56.482939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.118 [2024-07-11 15:26:56.484567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2384.878 ms, result 0 00:18:43.118 { 00:18:43.118 "name": "ftl0", 00:18:43.118 "uuid": "81426732-ca59-4209-8306-6b42195758ba" 00:18:43.118 } 00:18:43.118 15:26:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:43.118 15:26:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:43.118 15:26:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:43.376 15:26:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-11 15:26:56.840401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
00:18:43.376 Zero copy mechanism will not be used.
00:18:43.376 Running I/O for 4 seconds...
00:18:47.565
00:18:47.565 Latency(us)
00:18:47.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:47.565 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:47.565 ftl0 : 4.00 1717.77 114.07 0.00 0.00 605.66 231.80 1563.93
00:18:47.565 ===================================================================================================================
00:18:47.565 Total : 1717.77 114.07 0.00 0.00 605.66 231.80 1563.93
00:18:47.565 0
00:18:47.565 [2024-07-11 15:27:00.851803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
15:27:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-11 15:27:00.986003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:18:51.757
00:18:51.757 Latency(us)
00:18:51.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:51.757 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:51.757 ftl0 : 4.02 7678.40 29.99 0.00 0.00 16620.07 294.17 35746.91
00:18:51.757 ===================================================================================================================
00:18:51.757 Total : 7678.40 29.99 0.00 0.00 16620.07 0.00 35746.91
00:18:51.757 0
00:18:51.757 [2024-07-11 15:27:05.017798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
15:27:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-11 15:27:05.155487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
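(While the verify pass runs: bdevperf.sh@31-33 above drive the same ftl0 bdev through three timed passes — 69632-byte random writes at queue depth 1, 4096-byte random writes at queue depth 128, and this 4096-byte verify pass, whose results follow.) A minimal sketch of reproducing one pass by hand, built only from commands that appear in this log; the backgrounding and choice of parameters are illustrative, and bdevperf still needs a bdev configuration, which the surrounding test scripts supply:

    # Start bdevperf in wait-for-RPC mode against the FTL bdev under test
    # (this exact command line appears in bdevperf.sh@39's timing_exit later in the log):
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    # Fire one timed workload over RPC; -q/-w/-t/-o mirror the queue depth,
    # workload type, run time (seconds) and IO size (bytes) used above:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096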
00:18:55.950
00:18:55.950 Latency(us)
00:18:55.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.950 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:55.950 Verification LBA range: start 0x0 length 0x1400000
00:18:55.950 ftl0 : 4.01 6022.88 23.53 0.00 0.00 21172.35 361.19 31457.28
00:18:55.950 ===================================================================================================================
00:18:55.950 Total : 6022.88 23.53 0.00 0.00 21172.35 0.00 31457.28
00:18:55.950 [2024-07-11 15:27:09.186202] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:55.950 0
00:18:55.950 15:27:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-11 15:27:09.458416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:55.950 [2024-07-11 15:27:09.458498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:55.950 [2024-07-11 15:27:09.458522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:55.950 [2024-07-11 15:27:09.458534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:55.950 [2024-07-11 15:27:09.458571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:55.950 [2024-07-11 15:27:09.461926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:55.950 [2024-07-11 15:27:09.461964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:55.950 [2024-07-11 15:27:09.462021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.333 ms
00:18:55.950 [2024-07-11 15:27:09.462053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:55.950 [2024-07-11 15:27:09.463854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:55.950 [2024-07-11 15:27:09.463943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:55.950 [2024-07-11 15:27:09.463961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms
00:18:55.950 [2024-07-11 15:27:09.463975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:56.230 [2024-07-11 15:27:09.642533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:56.230 [2024-07-11 15:27:09.642625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:56.230 [2024-07-11 15:27:09.642652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.533 ms
00:18:56.230 [2024-07-11 15:27:09.642671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:56.230 [2024-07-11 15:27:09.648896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:56.230 [2024-07-11 15:27:09.648934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:56.230 [2024-07-11 15:27:09.648965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms
00:18:56.230 [2024-07-11 15:27:09.648978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:56.230 [2024-07-11 15:27:09.678251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:56.230 [2024-07-11 15:27:09.678301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:56.230 [2024-07-11 15:27:09.678320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 29.160 ms 00:18:56.230 [2024-07-11 15:27:09.678349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.230 [2024-07-11 15:27:09.696302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.230 [2024-07-11 15:27:09.696347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:56.230 [2024-07-11 15:27:09.696381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.909 ms 00:18:56.230 [2024-07-11 15:27:09.696398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.230 [2024-07-11 15:27:09.696560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.230 [2024-07-11 15:27:09.696584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:56.230 [2024-07-11 15:27:09.696597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:56.230 [2024-07-11 15:27:09.696612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.230 [2024-07-11 15:27:09.726086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.230 [2024-07-11 15:27:09.726131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:56.231 [2024-07-11 15:27:09.726165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.453 ms 00:18:56.231 [2024-07-11 15:27:09.726179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.231 [2024-07-11 15:27:09.755358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.231 [2024-07-11 15:27:09.755401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:56.231 [2024-07-11 15:27:09.755449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.136 ms 00:18:56.231 [2024-07-11 15:27:09.755478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.231 [2024-07-11 15:27:09.784510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.231 [2024-07-11 15:27:09.784554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:56.231 [2024-07-11 15:27:09.784587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.990 ms 00:18:56.231 [2024-07-11 15:27:09.784600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.231 [2024-07-11 15:27:09.813389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.231 [2024-07-11 15:27:09.813445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:56.231 [2024-07-11 15:27:09.813478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.690 ms 00:18:56.231 [2024-07-11 15:27:09.813493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.231 [2024-07-11 15:27:09.813533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:56.231 [2024-07-11 15:27:09.813558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:56.231 [2024-07-11 15:27:09.813610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.813970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814782] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:56.231 [2024-07-11 15:27:09.814832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.814997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.815008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.815023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.815050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.815063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.815075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:56.232 [2024-07-11 15:27:09.815097] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:56.232 [2024-07-11 15:27:09.815109] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 81426732-ca59-4209-8306-6b42195758ba 00:18:56.232 [2024-07-11 15:27:09.815133] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:56.232 [2024-07-11 15:27:09.815146] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:56.232 [2024-07-11 15:27:09.815159] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:56.232 [2024-07-11 15:27:09.815170] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:56.232 [2024-07-11 15:27:09.815186] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:56.232 [2024-07-11 15:27:09.815197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:56.232 [2024-07-11 15:27:09.815210] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:56.232 [2024-07-11 15:27:09.815221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:56.232 [2024-07-11 15:27:09.815234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:56.232 [2024-07-11 15:27:09.815246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.232 [2024-07-11 15:27:09.815259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:56.232 [2024-07-11 15:27:09.815271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:18:56.232 [2024-07-11 15:27:09.815284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.232 [2024-07-11 15:27:09.831541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.232 [2024-07-11 15:27:09.831589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:56.232 [2024-07-11 15:27:09.831610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.198 ms 00:18:56.232 [2024-07-11 15:27:09.831624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.232 [2024-07-11 15:27:09.832097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.232 [2024-07-11 15:27:09.832125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:56.232 [2024-07-11 15:27:09.832141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:18:56.232 [2024-07-11 15:27:09.832155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.499 [2024-07-11 15:27:09.872207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.499 [2024-07-11 15:27:09.872264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.499 [2024-07-11 15:27:09.872298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.499 [2024-07-11 15:27:09.872313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.499 [2024-07-11 15:27:09.872383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.499 [2024-07-11 15:27:09.872400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.499 [2024-07-11 15:27:09.872412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:09.872424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:09.872538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:09.872561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.500 [2024-07-11 15:27:09.872574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:09.872589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:09.872611] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:09.872628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.500 [2024-07-11 15:27:09.872640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:09.872652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:09.962418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:09.962487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.500 [2024-07-11 15:27:09.962525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:09.962541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.040502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.040590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.500 [2024-07-11 15:27:10.040610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.040624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.040724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.040746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:56.500 [2024-07-11 15:27:10.040758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.040770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.040826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.040845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:56.500 [2024-07-11 15:27:10.040857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.040869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.040976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.040997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:56.500 [2024-07-11 15:27:10.041010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.041024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.041134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.041156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:56.500 [2024-07-11 15:27:10.041169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.041182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.041225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.041243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.500 [2024-07-11 15:27:10.041255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.041268] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.041322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:56.500 [2024-07-11 15:27:10.041341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.500 [2024-07-11 15:27:10.041353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:56.500 [2024-07-11 15:27:10.041366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.500 [2024-07-11 15:27:10.041553] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.072 ms, result 0 00:18:56.500 true 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79662 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 79662 ']' 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 79662 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79662 00:18:56.500 killing process with pid 79662 00:18:56.500 Received shutdown signal, test time was about 4.000000 seconds 00:18:56.500 00:18:56.500 Latency(us) 00:18:56.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.500 =================================================================================================================== 00:18:56.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79662' 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 79662 00:18:56.500 15:27:10 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 79662 00:18:57.877 15:27:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:57.877 15:27:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:57.877 15:27:11 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:57.877 15:27:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:57.877 Remove shared memory files 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:57.878 ************************************ 00:18:57.878 END TEST ftl_bdevperf 00:18:57.878 ************************************ 00:18:57.878 00:18:57.878 real 0m21.736s 00:18:57.878 user 0m25.196s 00:18:57.878 sys 0m1.013s 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:57.878 15:27:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.878 15:27:11 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:57.878 15:27:11 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:57.878 15:27:11 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:57.878 15:27:11 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.878 15:27:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:57.878 ************************************ 00:18:57.878 START TEST ftl_trim 00:18:57.878 ************************************ 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:57.878 * Looking for test storage... 00:18:57.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:57.878 
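ftl/common.sh, being sourced above, mostly pins the paths and RPC endpoints every FTL test shares: rpc_py, the [0]/[1] core masks, the spdk_tgt and initiator binaries, and the config/socket pairs exported in the lines around this point (spdk_ini_rpc is /var/tmp/spdk.tgt.sock; the target side uses the default /var/tmp/spdk.sock). A sketch of how a test would address each side once both processes are up — bdev_get_bdevs is only an illustrative RPC here, and the -s flag is what selects the non-default socket:

    # Target side, default socket /var/tmp/spdk.sock:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
    # Initiator side, via the socket exported as spdk_ini_rpc above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_get_bdevs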
15:27:11 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:57.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80001 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80001 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80001 ']' 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.878 15:27:11 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.878 15:27:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:57.878 [2024-07-11 15:27:11.487573] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
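trim.sh@39-41 above start a dedicated spdk_tgt with a three-core mask (-m 0x7, matching the three reactors reported just below) and then block in waitforlisten until the new process, pid 80001, answers on the default RPC socket. A minimal sketch of that launch-and-wait pattern; the rpc_get_methods probe and the 0.5 s poll interval are illustrative stand-ins for waitforlisten's internals:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers:
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done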
00:18:57.878 [2024-07-11 15:27:11.487749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80001 ] 00:18:58.137 [2024-07-11 15:27:11.659919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:58.396 [2024-07-11 15:27:11.828888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.396 [2024-07-11 15:27:11.829075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.396 [2024-07-11 15:27:11.829097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.963 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.963 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:18:58.963 15:27:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:58.963 15:27:12 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:58.963 15:27:12 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:58.963 15:27:12 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:58.963 15:27:12 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:58.963 15:27:12 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:59.530 15:27:12 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:59.530 15:27:12 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:59.530 15:27:12 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:59.530 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:59.530 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:59.530 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:59.530 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:59.530 15:27:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:59.530 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:59.530 { 00:18:59.530 "name": "nvme0n1", 00:18:59.530 "aliases": [ 00:18:59.530 "fbc8e6bb-5f30-4da2-9ae3-26a4dc59980e" 00:18:59.530 ], 00:18:59.530 "product_name": "NVMe disk", 00:18:59.530 "block_size": 4096, 00:18:59.530 "num_blocks": 1310720, 00:18:59.530 "uuid": "fbc8e6bb-5f30-4da2-9ae3-26a4dc59980e", 00:18:59.530 "assigned_rate_limits": { 00:18:59.530 "rw_ios_per_sec": 0, 00:18:59.530 "rw_mbytes_per_sec": 0, 00:18:59.530 "r_mbytes_per_sec": 0, 00:18:59.530 "w_mbytes_per_sec": 0 00:18:59.530 }, 00:18:59.530 "claimed": true, 00:18:59.530 "claim_type": "read_many_write_one", 00:18:59.530 "zoned": false, 00:18:59.530 "supported_io_types": { 00:18:59.530 "read": true, 00:18:59.530 "write": true, 00:18:59.530 "unmap": true, 00:18:59.530 "flush": true, 00:18:59.530 "reset": true, 00:18:59.530 "nvme_admin": true, 00:18:59.530 "nvme_io": true, 00:18:59.530 "nvme_io_md": false, 00:18:59.530 "write_zeroes": true, 00:18:59.530 "zcopy": false, 00:18:59.530 "get_zone_info": false, 00:18:59.530 "zone_management": false, 00:18:59.530 "zone_append": false, 00:18:59.530 "compare": true, 00:18:59.530 "compare_and_write": false, 00:18:59.530 "abort": true, 00:18:59.530 "seek_hole": false, 00:18:59.530 "seek_data": false, 00:18:59.530 
"copy": true, 00:18:59.530 "nvme_iov_md": false 00:18:59.530 }, 00:18:59.530 "driver_specific": { 00:18:59.530 "nvme": [ 00:18:59.530 { 00:18:59.530 "pci_address": "0000:00:11.0", 00:18:59.530 "trid": { 00:18:59.530 "trtype": "PCIe", 00:18:59.530 "traddr": "0000:00:11.0" 00:18:59.530 }, 00:18:59.530 "ctrlr_data": { 00:18:59.530 "cntlid": 0, 00:18:59.530 "vendor_id": "0x1b36", 00:18:59.530 "model_number": "QEMU NVMe Ctrl", 00:18:59.530 "serial_number": "12341", 00:18:59.530 "firmware_revision": "8.0.0", 00:18:59.530 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:59.530 "oacs": { 00:18:59.530 "security": 0, 00:18:59.530 "format": 1, 00:18:59.530 "firmware": 0, 00:18:59.530 "ns_manage": 1 00:18:59.530 }, 00:18:59.530 "multi_ctrlr": false, 00:18:59.530 "ana_reporting": false 00:18:59.530 }, 00:18:59.530 "vs": { 00:18:59.530 "nvme_version": "1.4" 00:18:59.530 }, 00:18:59.530 "ns_data": { 00:18:59.530 "id": 1, 00:18:59.530 "can_share": false 00:18:59.530 } 00:18:59.530 } 00:18:59.530 ], 00:18:59.530 "mp_policy": "active_passive" 00:18:59.530 } 00:18:59.530 } 00:18:59.530 ]' 00:18:59.530 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:59.530 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:59.530 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:59.788 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:59.788 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:59.788 15:27:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:59.788 15:27:13 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:59.788 15:27:13 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:59.788 15:27:13 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:59.788 15:27:13 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:59.788 15:27:13 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:00.047 15:27:13 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=281b1b6a-e4d9-41cc-9626-9da871af9353 00:19:00.047 15:27:13 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:00.047 15:27:13 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 281b1b6a-e4d9-41cc-9626-9da871af9353 00:19:00.305 15:27:13 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:00.563 15:27:13 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b16aea30-0e83-45be-a8bc-8cc9cf3452e3 00:19:00.563 15:27:13 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b16aea30-0e83-45be-a8bc-8cc9cf3452e3 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=920918b6-1273-4679-acd5-56f7c95cc725 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 920918b6-1273-4679-acd5-56f7c95cc725 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=920918b6-1273-4679-acd5-56f7c95cc725 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:00.821 15:27:14 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 920918b6-1273-4679-acd5-56f7c95cc725 00:19:00.821 15:27:14 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=920918b6-1273-4679-acd5-56f7c95cc725 00:19:00.822 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:00.822 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:00.822 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:00.822 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 920918b6-1273-4679-acd5-56f7c95cc725 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:01.080 { 00:19:01.080 "name": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:01.080 "aliases": [ 00:19:01.080 "lvs/nvme0n1p0" 00:19:01.080 ], 00:19:01.080 "product_name": "Logical Volume", 00:19:01.080 "block_size": 4096, 00:19:01.080 "num_blocks": 26476544, 00:19:01.080 "uuid": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:01.080 "assigned_rate_limits": { 00:19:01.080 "rw_ios_per_sec": 0, 00:19:01.080 "rw_mbytes_per_sec": 0, 00:19:01.080 "r_mbytes_per_sec": 0, 00:19:01.080 "w_mbytes_per_sec": 0 00:19:01.080 }, 00:19:01.080 "claimed": false, 00:19:01.080 "zoned": false, 00:19:01.080 "supported_io_types": { 00:19:01.080 "read": true, 00:19:01.080 "write": true, 00:19:01.080 "unmap": true, 00:19:01.080 "flush": false, 00:19:01.080 "reset": true, 00:19:01.080 "nvme_admin": false, 00:19:01.080 "nvme_io": false, 00:19:01.080 "nvme_io_md": false, 00:19:01.080 "write_zeroes": true, 00:19:01.080 "zcopy": false, 00:19:01.080 "get_zone_info": false, 00:19:01.080 "zone_management": false, 00:19:01.080 "zone_append": false, 00:19:01.080 "compare": false, 00:19:01.080 "compare_and_write": false, 00:19:01.080 "abort": false, 00:19:01.080 "seek_hole": true, 00:19:01.080 "seek_data": true, 00:19:01.080 "copy": false, 00:19:01.080 "nvme_iov_md": false 00:19:01.080 }, 00:19:01.080 "driver_specific": { 00:19:01.080 "lvol": { 00:19:01.080 "lvol_store_uuid": "b16aea30-0e83-45be-a8bc-8cc9cf3452e3", 00:19:01.080 "base_bdev": "nvme0n1", 00:19:01.080 "thin_provision": true, 00:19:01.080 "num_allocated_clusters": 0, 00:19:01.080 "snapshot": false, 00:19:01.080 "clone": false, 00:19:01.080 "esnap_clone": false 00:19:01.080 } 00:19:01.080 } 00:19:01.080 } 00:19:01.080 ]' 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:01.080 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:01.080 15:27:14 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:01.080 15:27:14 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:01.080 15:27:14 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:01.339 15:27:14 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:01.339 15:27:14 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:01.339 15:27:14 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 920918b6-1273-4679-acd5-56f7c95cc725 00:19:01.339 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=920918b6-1273-4679-acd5-56f7c95cc725 00:19:01.339 
15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:01.339 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:01.339 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:01.339 15:27:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 920918b6-1273-4679-acd5-56f7c95cc725 00:19:01.597 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:01.597 { 00:19:01.597 "name": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:01.597 "aliases": [ 00:19:01.597 "lvs/nvme0n1p0" 00:19:01.597 ], 00:19:01.597 "product_name": "Logical Volume", 00:19:01.597 "block_size": 4096, 00:19:01.597 "num_blocks": 26476544, 00:19:01.597 "uuid": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:01.597 "assigned_rate_limits": { 00:19:01.597 "rw_ios_per_sec": 0, 00:19:01.597 "rw_mbytes_per_sec": 0, 00:19:01.597 "r_mbytes_per_sec": 0, 00:19:01.597 "w_mbytes_per_sec": 0 00:19:01.597 }, 00:19:01.597 "claimed": false, 00:19:01.597 "zoned": false, 00:19:01.597 "supported_io_types": { 00:19:01.597 "read": true, 00:19:01.597 "write": true, 00:19:01.597 "unmap": true, 00:19:01.597 "flush": false, 00:19:01.597 "reset": true, 00:19:01.597 "nvme_admin": false, 00:19:01.597 "nvme_io": false, 00:19:01.597 "nvme_io_md": false, 00:19:01.597 "write_zeroes": true, 00:19:01.597 "zcopy": false, 00:19:01.597 "get_zone_info": false, 00:19:01.597 "zone_management": false, 00:19:01.597 "zone_append": false, 00:19:01.597 "compare": false, 00:19:01.597 "compare_and_write": false, 00:19:01.597 "abort": false, 00:19:01.597 "seek_hole": true, 00:19:01.597 "seek_data": true, 00:19:01.597 "copy": false, 00:19:01.597 "nvme_iov_md": false 00:19:01.597 }, 00:19:01.597 "driver_specific": { 00:19:01.597 "lvol": { 00:19:01.597 "lvol_store_uuid": "b16aea30-0e83-45be-a8bc-8cc9cf3452e3", 00:19:01.597 "base_bdev": "nvme0n1", 00:19:01.597 "thin_provision": true, 00:19:01.597 "num_allocated_clusters": 0, 00:19:01.597 "snapshot": false, 00:19:01.597 "clone": false, 00:19:01.597 "esnap_clone": false 00:19:01.597 } 00:19:01.597 } 00:19:01.597 } 00:19:01.597 ]' 00:19:01.597 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:01.856 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:01.856 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:01.856 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:01.856 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:01.856 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:01.856 15:27:15 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:01.856 15:27:15 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:02.115 15:27:15 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:02.115 15:27:15 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:02.115 15:27:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 920918b6-1273-4679-acd5-56f7c95cc725 00:19:02.115 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=920918b6-1273-4679-acd5-56f7c95cc725 00:19:02.115 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:02.115 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:02.115 15:27:15 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:19:02.115 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 920918b6-1273-4679-acd5-56f7c95cc725 00:19:02.374 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:02.374 { 00:19:02.374 "name": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:02.374 "aliases": [ 00:19:02.374 "lvs/nvme0n1p0" 00:19:02.374 ], 00:19:02.374 "product_name": "Logical Volume", 00:19:02.374 "block_size": 4096, 00:19:02.374 "num_blocks": 26476544, 00:19:02.374 "uuid": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:02.374 "assigned_rate_limits": { 00:19:02.374 "rw_ios_per_sec": 0, 00:19:02.374 "rw_mbytes_per_sec": 0, 00:19:02.374 "r_mbytes_per_sec": 0, 00:19:02.374 "w_mbytes_per_sec": 0 00:19:02.374 }, 00:19:02.374 "claimed": false, 00:19:02.374 "zoned": false, 00:19:02.374 "supported_io_types": { 00:19:02.374 "read": true, 00:19:02.374 "write": true, 00:19:02.374 "unmap": true, 00:19:02.374 "flush": false, 00:19:02.374 "reset": true, 00:19:02.374 "nvme_admin": false, 00:19:02.374 "nvme_io": false, 00:19:02.374 "nvme_io_md": false, 00:19:02.374 "write_zeroes": true, 00:19:02.374 "zcopy": false, 00:19:02.374 "get_zone_info": false, 00:19:02.374 "zone_management": false, 00:19:02.374 "zone_append": false, 00:19:02.374 "compare": false, 00:19:02.374 "compare_and_write": false, 00:19:02.374 "abort": false, 00:19:02.374 "seek_hole": true, 00:19:02.374 "seek_data": true, 00:19:02.374 "copy": false, 00:19:02.374 "nvme_iov_md": false 00:19:02.374 }, 00:19:02.374 "driver_specific": { 00:19:02.374 "lvol": { 00:19:02.374 "lvol_store_uuid": "b16aea30-0e83-45be-a8bc-8cc9cf3452e3", 00:19:02.374 "base_bdev": "nvme0n1", 00:19:02.374 "thin_provision": true, 00:19:02.374 "num_allocated_clusters": 0, 00:19:02.374 "snapshot": false, 00:19:02.374 "clone": false, 00:19:02.374 "esnap_clone": false 00:19:02.374 } 00:19:02.374 } 00:19:02.374 } 00:19:02.374 ]' 00:19:02.374 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:02.374 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:02.375 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:02.375 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:02.375 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:02.375 15:27:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:02.375 15:27:15 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:02.375 15:27:15 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 920918b6-1273-4679-acd5-56f7c95cc725 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:02.635 [2024-07-11 15:27:16.083446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.083524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.635 [2024-07-11 15:27:16.083544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:02.635 [2024-07-11 15:27:16.083561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.086904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.086965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.635 [2024-07-11 15:27:16.086999] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:19:02.635 [2024-07-11 15:27:16.087013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.087206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.635 [2024-07-11 15:27:16.088208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.635 [2024-07-11 15:27:16.088249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.088269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.635 [2024-07-11 15:27:16.088284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:19:02.635 [2024-07-11 15:27:16.088298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.088512] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:19:02.635 [2024-07-11 15:27:16.089562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.089616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:02.635 [2024-07-11 15:27:16.089653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:02.635 [2024-07-11 15:27:16.089665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.093928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.093999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.635 [2024-07-11 15:27:16.094369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.165 ms 00:19:02.635 [2024-07-11 15:27:16.094401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.095145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.095186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.635 [2024-07-11 15:27:16.095207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:02.635 [2024-07-11 15:27:16.095221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.095289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.095308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.635 [2024-07-11 15:27:16.095328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:02.635 [2024-07-11 15:27:16.095340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.095386] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:02.635 [2024-07-11 15:27:16.099930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.099975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.635 [2024-07-11 15:27:16.099992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.556 ms 00:19:02.635 [2024-07-11 15:27:16.100006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 
15:27:16.100117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.100143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.635 [2024-07-11 15:27:16.100158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:02.635 [2024-07-11 15:27:16.100171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.100213] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:02.635 [2024-07-11 15:27:16.100376] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.635 [2024-07-11 15:27:16.100402] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.635 [2024-07-11 15:27:16.100424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:02.635 [2024-07-11 15:27:16.100440] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.635 [2024-07-11 15:27:16.100456] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.635 [2024-07-11 15:27:16.100470] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:02.635 [2024-07-11 15:27:16.100484] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.635 [2024-07-11 15:27:16.100511] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.635 [2024-07-11 15:27:16.100545] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.635 [2024-07-11 15:27:16.100558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.100572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.635 [2024-07-11 15:27:16.100584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:19:02.635 [2024-07-11 15:27:16.100598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.100705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.635 [2024-07-11 15:27:16.100725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.635 [2024-07-11 15:27:16.100739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:02.635 [2024-07-11 15:27:16.100752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.635 [2024-07-11 15:27:16.100885] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.635 [2024-07-11 15:27:16.100919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.635 [2024-07-11 15:27:16.100935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.635 [2024-07-11 15:27:16.100950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.635 [2024-07-11 15:27:16.100962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.635 [2024-07-11 15:27:16.100975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.635 [2024-07-11 15:27:16.100986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:02.635 [2024-07-11 15:27:16.100999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:19:02.635 [2024-07-11 15:27:16.101010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.635 [2024-07-11 15:27:16.101052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.635 [2024-07-11 15:27:16.101066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:02.635 [2024-07-11 15:27:16.101077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.635 [2024-07-11 15:27:16.101092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.635 [2024-07-11 15:27:16.101104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:02.635 [2024-07-11 15:27:16.101116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.635 [2024-07-11 15:27:16.101141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:02.635 [2024-07-11 15:27:16.101152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.635 [2024-07-11 15:27:16.101176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.635 [2024-07-11 15:27:16.101200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.635 [2024-07-11 15:27:16.101212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.635 [2024-07-11 15:27:16.101236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.635 [2024-07-11 15:27:16.101247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.635 [2024-07-11 15:27:16.101270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.635 [2024-07-11 15:27:16.101282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.635 [2024-07-11 15:27:16.101305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.635 [2024-07-11 15:27:16.101316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:02.635 [2024-07-11 15:27:16.101331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.636 [2024-07-11 15:27:16.101342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.636 [2024-07-11 15:27:16.101355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:02.636 [2024-07-11 15:27:16.101366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.636 [2024-07-11 15:27:16.101379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.636 [2024-07-11 15:27:16.101389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:02.636 [2024-07-11 15:27:16.101403] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.636 [2024-07-11 15:27:16.101415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.636 [2024-07-11 15:27:16.101427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:02.636 [2024-07-11 15:27:16.101440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.636 [2024-07-11 15:27:16.101452] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.636 [2024-07-11 15:27:16.101464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.636 [2024-07-11 15:27:16.101483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.636 [2024-07-11 15:27:16.101495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.636 [2024-07-11 15:27:16.101508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.636 [2024-07-11 15:27:16.101520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.636 [2024-07-11 15:27:16.101534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.636 [2024-07-11 15:27:16.101545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.636 [2024-07-11 15:27:16.101558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.636 [2024-07-11 15:27:16.101569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.636 [2024-07-11 15:27:16.101586] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.636 [2024-07-11 15:27:16.101610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:02.636 [2024-07-11 15:27:16.101638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:02.636 [2024-07-11 15:27:16.101652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:02.636 [2024-07-11 15:27:16.101663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:02.636 [2024-07-11 15:27:16.101677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:02.636 [2024-07-11 15:27:16.101689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:02.636 [2024-07-11 15:27:16.101703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:02.636 [2024-07-11 15:27:16.101714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:02.636 [2024-07-11 15:27:16.101730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:02.636 [2024-07-11 15:27:16.101742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:02.636 [2024-07-11 15:27:16.101809] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.636 [2024-07-11 15:27:16.101823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:02.636 [2024-07-11 15:27:16.101850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.636 [2024-07-11 15:27:16.101864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.636 [2024-07-11 15:27:16.101881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.636 [2024-07-11 15:27:16.101897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.636 [2024-07-11 15:27:16.101910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.636 [2024-07-11 15:27:16.101925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:19:02.636 [2024-07-11 15:27:16.101936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.636 [2024-07-11 15:27:16.102054] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
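The layout dump above fixes the FTL geometry, and its figures are self-consistent: the 0x1900000-block data region of the base device, minus the 10% handed to --overprovisioning in the bdev_ftl_create call, yields exactly the 23592960 L2P entries reported at layout setup, and at 4 bytes per entry the map needs 90 MiB, matching the 90.00 MiB l2p region in the NV cache layout. A minimal bash sketch re-deriving these numbers (all constants are copied from the log records above; the script is illustrative only and is not part of the test suite):

  # Constants copied from the layout dump and the bdev_ftl_create call above.
  block_size=4096              # logical block size of the base bdev
  data_blocks=$((0x1900000))   # "Region type:0x9" (data) size in blocks
  op_pct=10                    # --overprovisioning passed to bdev_ftl_create
  l2p_addr_size=4              # "L2P address size" reported at layout setup

  # Exported capacity after overprovisioning: 23592960 blocks, matching
  # both the "L2P entries" count and num_blocks of ftl0 later in the log.
  user_blocks=$(( data_blocks * (100 - op_pct) / 100 ))
  echo "user blocks: ${user_blocks}"

  # One 4-byte entry per user block gives a 90 MiB map, i.e. the 90.00 MiB
  # "l2p" region above. With --l2p_dram_limit 60 only part of it fits in
  # DRAM, hence the later "l2p maximum resident size is: 59 (of 60) MiB".
  echo "l2p map size: $(( user_blocks * l2p_addr_size / 1024 / 1024 )) MiB"

With the geometry settled, startup proceeds to scrub the NV cache's 5 chunks, which is why the trace pauses here for roughly two seconds before the next record.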
00:19:02.636 [2024-07-11 15:27:16.102076] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:04.539 [2024-07-11 15:27:18.088765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.539 [2024-07-11 15:27:18.088853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:04.539 [2024-07-11 15:27:18.088897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1986.713 ms 00:19:04.539 [2024-07-11 15:27:18.088911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.539 [2024-07-11 15:27:18.120711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.539 [2024-07-11 15:27:18.120790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.539 [2024-07-11 15:27:18.120831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.443 ms 00:19:04.539 [2024-07-11 15:27:18.120844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.539 [2024-07-11 15:27:18.121072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.539 [2024-07-11 15:27:18.121095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.539 [2024-07-11 15:27:18.121111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:04.539 [2024-07-11 15:27:18.121126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.172254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.172339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.799 [2024-07-11 15:27:18.172378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.075 ms 00:19:04.799 [2024-07-11 15:27:18.172395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.172560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.172587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.799 [2024-07-11 15:27:18.172608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:04.799 [2024-07-11 15:27:18.172623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.173045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.173078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.799 [2024-07-11 15:27:18.173099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:04.799 [2024-07-11 15:27:18.173115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.173317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.173351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.799 [2024-07-11 15:27:18.173372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:04.799 [2024-07-11 15:27:18.173387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.192271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.192343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.799 [2024-07-11 
15:27:18.192366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.830 ms 00:19:04.799 [2024-07-11 15:27:18.192379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.205699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:04.799 [2024-07-11 15:27:18.219256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.219364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.799 [2024-07-11 15:27:18.219386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.711 ms 00:19:04.799 [2024-07-11 15:27:18.219400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.283896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.283985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:04.799 [2024-07-11 15:27:18.284023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.360 ms 00:19:04.799 [2024-07-11 15:27:18.284049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.284319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.284345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.799 [2024-07-11 15:27:18.284360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:19:04.799 [2024-07-11 15:27:18.284377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.317547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.317607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:04.799 [2024-07-11 15:27:18.317644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.125 ms 00:19:04.799 [2024-07-11 15:27:18.317659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.347338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.347402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:04.799 [2024-07-11 15:27:18.347437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.589 ms 00:19:04.799 [2024-07-11 15:27:18.347451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.799 [2024-07-11 15:27:18.348286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.799 [2024-07-11 15:27:18.348340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.799 [2024-07-11 15:27:18.348356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:19:04.799 [2024-07-11 15:27:18.348370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 15:27:18.434390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.059 [2024-07-11 15:27:18.434465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:05.059 [2024-07-11 15:27:18.434486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.972 ms 00:19:05.059 [2024-07-11 15:27:18.434505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 
15:27:18.464995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.059 [2024-07-11 15:27:18.465066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:05.059 [2024-07-11 15:27:18.465101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.383 ms 00:19:05.059 [2024-07-11 15:27:18.465118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 15:27:18.495022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.059 [2024-07-11 15:27:18.495092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:05.059 [2024-07-11 15:27:18.495127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.806 ms 00:19:05.059 [2024-07-11 15:27:18.495140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 15:27:18.525304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.059 [2024-07-11 15:27:18.525366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:05.059 [2024-07-11 15:27:18.525400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.070 ms 00:19:05.059 [2024-07-11 15:27:18.525414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 15:27:18.525514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.059 [2024-07-11 15:27:18.525539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:05.059 [2024-07-11 15:27:18.525570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:05.059 [2024-07-11 15:27:18.525587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 15:27:18.525681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.059 [2024-07-11 15:27:18.525719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:05.059 [2024-07-11 15:27:18.525734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:05.059 [2024-07-11 15:27:18.525767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.059 [2024-07-11 15:27:18.526749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.059 [2024-07-11 15:27:18.530804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2442.986 ms, result 0 00:19:05.059 [2024-07-11 15:27:18.531735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:05.059 { 00:19:05.059 "name": "ftl0", 00:19:05.059 "uuid": "f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58" 00:19:05.059 } 00:19:05.059 15:27:18 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:05.059 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:19:05.059 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:05.059 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:19:05.059 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:05.059 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:05.059 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:05.318 15:27:18 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:05.577 [ 00:19:05.577 { 00:19:05.577 "name": "ftl0", 00:19:05.577 "aliases": [ 00:19:05.577 "f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58" 00:19:05.577 ], 00:19:05.577 "product_name": "FTL disk", 00:19:05.577 "block_size": 4096, 00:19:05.577 "num_blocks": 23592960, 00:19:05.577 "uuid": "f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58", 00:19:05.577 "assigned_rate_limits": { 00:19:05.577 "rw_ios_per_sec": 0, 00:19:05.577 "rw_mbytes_per_sec": 0, 00:19:05.577 "r_mbytes_per_sec": 0, 00:19:05.577 "w_mbytes_per_sec": 0 00:19:05.577 }, 00:19:05.577 "claimed": false, 00:19:05.577 "zoned": false, 00:19:05.577 "supported_io_types": { 00:19:05.577 "read": true, 00:19:05.577 "write": true, 00:19:05.577 "unmap": true, 00:19:05.577 "flush": true, 00:19:05.577 "reset": false, 00:19:05.577 "nvme_admin": false, 00:19:05.577 "nvme_io": false, 00:19:05.577 "nvme_io_md": false, 00:19:05.577 "write_zeroes": true, 00:19:05.577 "zcopy": false, 00:19:05.577 "get_zone_info": false, 00:19:05.577 "zone_management": false, 00:19:05.577 "zone_append": false, 00:19:05.577 "compare": false, 00:19:05.577 "compare_and_write": false, 00:19:05.577 "abort": false, 00:19:05.577 "seek_hole": false, 00:19:05.577 "seek_data": false, 00:19:05.577 "copy": false, 00:19:05.577 "nvme_iov_md": false 00:19:05.577 }, 00:19:05.577 "driver_specific": { 00:19:05.577 "ftl": { 00:19:05.577 "base_bdev": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:05.577 "cache": "nvc0n1p0" 00:19:05.577 } 00:19:05.577 } 00:19:05.577 } 00:19:05.577 ] 00:19:05.577 15:27:19 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:19:05.577 15:27:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:05.577 15:27:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:05.836 15:27:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:05.836 15:27:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:06.095 15:27:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:06.095 { 00:19:06.095 "name": "ftl0", 00:19:06.095 "aliases": [ 00:19:06.095 "f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58" 00:19:06.095 ], 00:19:06.095 "product_name": "FTL disk", 00:19:06.095 "block_size": 4096, 00:19:06.095 "num_blocks": 23592960, 00:19:06.095 "uuid": "f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58", 00:19:06.095 "assigned_rate_limits": { 00:19:06.095 "rw_ios_per_sec": 0, 00:19:06.095 "rw_mbytes_per_sec": 0, 00:19:06.095 "r_mbytes_per_sec": 0, 00:19:06.095 "w_mbytes_per_sec": 0 00:19:06.095 }, 00:19:06.095 "claimed": false, 00:19:06.095 "zoned": false, 00:19:06.095 "supported_io_types": { 00:19:06.095 "read": true, 00:19:06.095 "write": true, 00:19:06.095 "unmap": true, 00:19:06.095 "flush": true, 00:19:06.095 "reset": false, 00:19:06.095 "nvme_admin": false, 00:19:06.095 "nvme_io": false, 00:19:06.095 "nvme_io_md": false, 00:19:06.095 "write_zeroes": true, 00:19:06.095 "zcopy": false, 00:19:06.095 "get_zone_info": false, 00:19:06.095 "zone_management": false, 00:19:06.095 "zone_append": false, 00:19:06.095 "compare": false, 00:19:06.095 "compare_and_write": false, 00:19:06.095 "abort": false, 00:19:06.095 "seek_hole": false, 00:19:06.095 "seek_data": false, 00:19:06.095 "copy": false, 00:19:06.095 "nvme_iov_md": false 00:19:06.095 }, 00:19:06.095 "driver_specific": { 00:19:06.095 "ftl": { 00:19:06.095 "base_bdev": "920918b6-1273-4679-acd5-56f7c95cc725", 00:19:06.095 "cache": "nvc0n1p0" 
00:19:06.095 } 00:19:06.095 } 00:19:06.095 } 00:19:06.095 ]' 00:19:06.095 15:27:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:06.095 15:27:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:06.095 15:27:19 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:06.355 [2024-07-11 15:27:19.811015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.355 [2024-07-11 15:27:19.811125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:06.355 [2024-07-11 15:27:19.811152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:06.355 [2024-07-11 15:27:19.811165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.355 [2024-07-11 15:27:19.811218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:06.355 [2024-07-11 15:27:19.814454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.355 [2024-07-11 15:27:19.814505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:06.355 [2024-07-11 15:27:19.814520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.212 ms 00:19:06.355 [2024-07-11 15:27:19.814539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.355 [2024-07-11 15:27:19.815135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.355 [2024-07-11 15:27:19.815168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:06.355 [2024-07-11 15:27:19.815184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:19:06.355 [2024-07-11 15:27:19.815197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.355 [2024-07-11 15:27:19.818858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.355 [2024-07-11 15:27:19.818908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:06.356 [2024-07-11 15:27:19.818923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:19:06.356 [2024-07-11 15:27:19.818936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.826397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.826459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:06.356 [2024-07-11 15:27:19.826491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms 00:19:06.356 [2024-07-11 15:27:19.826504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.856322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.856400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:06.356 [2024-07-11 15:27:19.856418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.696 ms 00:19:06.356 [2024-07-11 15:27:19.856449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.875006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.875082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:06.356 [2024-07-11 15:27:19.875104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.467 ms 00:19:06.356 
[2024-07-11 15:27:19.875118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.875391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.875422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:06.356 [2024-07-11 15:27:19.875438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:19:06.356 [2024-07-11 15:27:19.875452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.905722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.905773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:06.356 [2024-07-11 15:27:19.905791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.232 ms 00:19:06.356 [2024-07-11 15:27:19.905805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.935986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.936081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:06.356 [2024-07-11 15:27:19.936101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.083 ms 00:19:06.356 [2024-07-11 15:27:19.936117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.356 [2024-07-11 15:27:19.967350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.356 [2024-07-11 15:27:19.967402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:06.356 [2024-07-11 15:27:19.967421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.136 ms 00:19:06.356 [2024-07-11 15:27:19.967436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.616 [2024-07-11 15:27:19.998024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.616 [2024-07-11 15:27:19.998080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:06.616 [2024-07-11 15:27:19.998098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.437 ms 00:19:06.616 [2024-07-11 15:27:19.998112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.616 [2024-07-11 15:27:19.998208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:06.616 [2024-07-11 15:27:19.998239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998340] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998709] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.998999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 
15:27:19.999053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:19:06.617 [2024-07-11 15:27:19.999407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:06.617 [2024-07-11 15:27:19.999568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:06.618 [2024-07-11 15:27:19.999670] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:06.618 [2024-07-11 15:27:19.999682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:19:06.618 [2024-07-11 15:27:19.999697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:06.618 [2024-07-11 15:27:19.999708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:06.618 [2024-07-11 15:27:19.999724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:06.618 [2024-07-11 15:27:19.999736] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:06.618 [2024-07-11 15:27:19.999748] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:06.618 [2024-07-11 15:27:19.999760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:06.618 [2024-07-11 15:27:19.999774] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:06.618 [2024-07-11 15:27:19.999785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:06.618 [2024-07-11 15:27:19.999797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:06.618 [2024-07-11 15:27:19.999809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.618 [2024-07-11 15:27:19.999823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:06.618 [2024-07-11 15:27:19.999835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:19:06.618 [2024-07-11 15:27:19.999848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.026308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.618 [2024-07-11 15:27:20.026384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:06.618 [2024-07-11 15:27:20.026414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.413 ms 00:19:06.618 [2024-07-11 15:27:20.026441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.027178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.618 [2024-07-11 15:27:20.027221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:06.618 [2024-07-11 15:27:20.027244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:19:06.618 [2024-07-11 15:27:20.027265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.085250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.618 [2024-07-11 15:27:20.085328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.618 [2024-07-11 15:27:20.085347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.618 [2024-07-11 15:27:20.085362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.085512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.618 [2024-07-11 15:27:20.085537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.618 [2024-07-11 15:27:20.085568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.618 [2024-07-11 15:27:20.085581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.085667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.618 [2024-07-11 15:27:20.085695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.618 [2024-07-11 15:27:20.085709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.618 [2024-07-11 15:27:20.085725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.085765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.618 [2024-07-11 15:27:20.085784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.618 [2024-07-11 15:27:20.085803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.618 [2024-07-11 15:27:20.085817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.618 [2024-07-11 15:27:20.188151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:19:06.618 [2024-07-11 15:27:20.188209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.618 [2024-07-11 15:27:20.188244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.618 [2024-07-11 15:27:20.188258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.267965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.877 [2024-07-11 15:27:20.268098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.268112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.268232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.877 [2024-07-11 15:27:20.268273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.268296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.268371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.877 [2024-07-11 15:27:20.268403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.268416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.268561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.877 [2024-07-11 15:27:20.268630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.268649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.268723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:06.877 [2024-07-11 15:27:20.268761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.268775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.268836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.877 [2024-07-11 15:27:20.268883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.268902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 15:27:20.268969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.877 [2024-07-11 15:27:20.268990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.877 [2024-07-11 15:27:20.269004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.877 [2024-07-11 15:27:20.269017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.877 [2024-07-11 
15:27:20.269246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 458.222 ms, result 0 00:19:06.877 true 00:19:06.877 15:27:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80001 00:19:06.877 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80001 ']' 00:19:06.877 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80001 00:19:06.877 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80001 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:06.878 killing process with pid 80001 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80001' 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80001 00:19:06.878 15:27:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80001 00:19:12.150 15:27:24 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:12.718 65536+0 records in 00:19:12.718 65536+0 records out 00:19:12.718 268435456 bytes (268 MB, 256 MiB) copied, 1.22441 s, 219 MB/s 00:19:12.718 15:27:26 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:12.718 [2024-07-11 15:27:26.166620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
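The xtrace above shows trim.sh tearing down the first app instance (pid 80001) with killprocess, then staging test data: dd generates a 256 MiB random pattern (65536 records x 4 KiB = 268435456 bytes, matching the reported byte count at 219 MB/s), and spdk_dd replays that pattern into the ftl0 bdev using the ftl.json config. A minimal bash sketch of a killprocess-style helper, reconstructed only from the commands visible in the trace (the real implementation in autotest_common.sh may differ in details):

killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1                # refuse an empty pid argument
	kill -0 "$pid" || return 0               # already gone; nothing to kill
	if [ "$(uname)" = Linux ]; then
		process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
	fi
	[ "$process_name" = sudo ] && return 1   # never signal a sudo wrapper directly
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"                              # reap the child and surface its exit status
}

The kill -0 probe distinguishes "process already exited" from a real kill failure, and the trailing wait propagates the app's exit status back to the harness, so an unclean shutdown surfaces as a test failure rather than being silently ignored.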
00:19:12.718 [2024-07-11 15:27:26.168105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80195 ] 00:19:12.976 [2024-07-11 15:27:26.346804] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.976 [2024-07-11 15:27:26.578642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.544 [2024-07-11 15:27:26.894775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.544 [2024-07-11 15:27:26.894897] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.544 [2024-07-11 15:27:27.060230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.544 [2024-07-11 15:27:27.060309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:13.544 [2024-07-11 15:27:27.060328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.544 [2024-07-11 15:27:27.060340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.544 [2024-07-11 15:27:27.063870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.544 [2024-07-11 15:27:27.063929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.544 [2024-07-11 15:27:27.063946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:19:13.544 [2024-07-11 15:27:27.063957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.544 [2024-07-11 15:27:27.064159] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:13.544 [2024-07-11 15:27:27.065152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:13.544 [2024-07-11 15:27:27.065190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.544 [2024-07-11 15:27:27.065204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.544 [2024-07-11 15:27:27.065218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:19:13.544 [2024-07-11 15:27:27.065230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.544 [2024-07-11 15:27:27.066614] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:13.544 [2024-07-11 15:27:27.084063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.544 [2024-07-11 15:27:27.084164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:13.544 [2024-07-11 15:27:27.084196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.444 ms 00:19:13.544 [2024-07-11 15:27:27.084208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.544 [2024-07-11 15:27:27.084417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.544 [2024-07-11 15:27:27.084455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:13.544 [2024-07-11 15:27:27.084486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:13.544 [2024-07-11 15:27:27.084498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.544 [2024-07-11 15:27:27.089404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:13.545 [2024-07-11 15:27:27.089493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.545 [2024-07-11 15:27:27.089526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.840 ms 00:19:13.545 [2024-07-11 15:27:27.089539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.089742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.545 [2024-07-11 15:27:27.089769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.545 [2024-07-11 15:27:27.089784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:13.545 [2024-07-11 15:27:27.089796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.089846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.545 [2024-07-11 15:27:27.089861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:13.545 [2024-07-11 15:27:27.089874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:13.545 [2024-07-11 15:27:27.089889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.089927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:13.545 [2024-07-11 15:27:27.094306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.545 [2024-07-11 15:27:27.094390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.545 [2024-07-11 15:27:27.094419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.392 ms 00:19:13.545 [2024-07-11 15:27:27.094439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.094564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.545 [2024-07-11 15:27:27.094584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:13.545 [2024-07-11 15:27:27.094597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:13.545 [2024-07-11 15:27:27.094608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.094639] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:13.545 [2024-07-11 15:27:27.094667] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:13.545 [2024-07-11 15:27:27.094773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:13.545 [2024-07-11 15:27:27.094809] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:13.545 [2024-07-11 15:27:27.094925] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:13.545 [2024-07-11 15:27:27.094940] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:13.545 [2024-07-11 15:27:27.094955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:13.545 [2024-07-11 15:27:27.094970] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:13.545 [2024-07-11 15:27:27.094983] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:13.545 [2024-07-11 15:27:27.094996] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:13.545 [2024-07-11 15:27:27.095012] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:13.545 [2024-07-11 15:27:27.095023] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:13.545 [2024-07-11 15:27:27.095034] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:13.545 [2024-07-11 15:27:27.095045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.545 [2024-07-11 15:27:27.095057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:13.545 [2024-07-11 15:27:27.095068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:19:13.545 [2024-07-11 15:27:27.095079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.095189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.545 [2024-07-11 15:27:27.095205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:13.545 [2024-07-11 15:27:27.095217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:13.545 [2024-07-11 15:27:27.095233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.545 [2024-07-11 15:27:27.095360] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:13.545 [2024-07-11 15:27:27.095377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:13.545 [2024-07-11 15:27:27.095389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:13.545 [2024-07-11 15:27:27.095425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:13.545 [2024-07-11 15:27:27.095457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.545 [2024-07-11 15:27:27.095484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:13.545 [2024-07-11 15:27:27.095494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:13.545 [2024-07-11 15:27:27.095504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.545 [2024-07-11 15:27:27.095515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:13.545 [2024-07-11 15:27:27.095526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:13.545 [2024-07-11 15:27:27.095536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:13.545 [2024-07-11 15:27:27.095556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095581] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:13.545 [2024-07-11 15:27:27.095602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:13.545 [2024-07-11 15:27:27.095634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:13.545 [2024-07-11 15:27:27.095664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:13.545 [2024-07-11 15:27:27.095696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:13.545 [2024-07-11 15:27:27.095728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.545 [2024-07-11 15:27:27.095748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:13.545 [2024-07-11 15:27:27.095759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:13.545 [2024-07-11 15:27:27.095770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.545 [2024-07-11 15:27:27.095780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:13.545 [2024-07-11 15:27:27.095791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:13.545 [2024-07-11 15:27:27.095801] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:13.545 [2024-07-11 15:27:27.095821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:13.545 [2024-07-11 15:27:27.095831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095841] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:13.545 [2024-07-11 15:27:27.095852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:13.545 [2024-07-11 15:27:27.095864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.545 [2024-07-11 15:27:27.095886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:13.545 [2024-07-11 15:27:27.095897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:13.545 [2024-07-11 15:27:27.095907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:13.545 
[2024-07-11 15:27:27.095918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:13.545 [2024-07-11 15:27:27.095928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:13.545 [2024-07-11 15:27:27.095939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:13.545 [2024-07-11 15:27:27.095951] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:13.545 [2024-07-11 15:27:27.095970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.545 [2024-07-11 15:27:27.095983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:13.545 [2024-07-11 15:27:27.095995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:13.545 [2024-07-11 15:27:27.096006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:13.545 [2024-07-11 15:27:27.096018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:13.545 [2024-07-11 15:27:27.096029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:13.545 [2024-07-11 15:27:27.096056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:13.545 [2024-07-11 15:27:27.096069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:13.545 [2024-07-11 15:27:27.096081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:13.545 [2024-07-11 15:27:27.096093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:13.545 [2024-07-11 15:27:27.096105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:13.545 [2024-07-11 15:27:27.096117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:13.545 [2024-07-11 15:27:27.096129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:13.545 [2024-07-11 15:27:27.096141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:13.546 [2024-07-11 15:27:27.096153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:13.546 [2024-07-11 15:27:27.096164] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:13.546 [2024-07-11 15:27:27.096177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.546 [2024-07-11 15:27:27.096189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:13.546 [2024-07-11 15:27:27.096201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:13.546 [2024-07-11 15:27:27.096213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:13.546 [2024-07-11 15:27:27.096224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:13.546 [2024-07-11 15:27:27.096237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.546 [2024-07-11 15:27:27.096248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:13.546 [2024-07-11 15:27:27.096261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:19:13.546 [2024-07-11 15:27:27.096271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.546 [2024-07-11 15:27:27.143625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.546 [2024-07-11 15:27:27.143700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.546 [2024-07-11 15:27:27.143738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.277 ms 00:19:13.546 [2024-07-11 15:27:27.143752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.546 [2024-07-11 15:27:27.143982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.546 [2024-07-11 15:27:27.144004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:13.546 [2024-07-11 15:27:27.144019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:13.546 [2024-07-11 15:27:27.144064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.185094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.185171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.805 [2024-07-11 15:27:27.185206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.982 ms 00:19:13.805 [2024-07-11 15:27:27.185217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.185362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.185382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.805 [2024-07-11 15:27:27.185395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:13.805 [2024-07-11 15:27:27.185407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.185774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.185792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.805 [2024-07-11 15:27:27.185819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:19:13.805 [2024-07-11 15:27:27.185844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.186022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.186070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.805 [2024-07-11 15:27:27.186085] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:13.805 [2024-07-11 15:27:27.186096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.203178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.203245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.805 [2024-07-11 15:27:27.203283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.047 ms 00:19:13.805 [2024-07-11 15:27:27.203294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.221042] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:13.805 [2024-07-11 15:27:27.221162] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:13.805 [2024-07-11 15:27:27.221201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.221228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:13.805 [2024-07-11 15:27:27.221244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.674 ms 00:19:13.805 [2024-07-11 15:27:27.221256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.252760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.805 [2024-07-11 15:27:27.252890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:13.805 [2024-07-11 15:27:27.252942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.302 ms 00:19:13.805 [2024-07-11 15:27:27.252954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.805 [2024-07-11 15:27:27.270528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.270627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:13.806 [2024-07-11 15:27:27.270664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.348 ms 00:19:13.806 [2024-07-11 15:27:27.270677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.288080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.288183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:13.806 [2024-07-11 15:27:27.288237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.215 ms 00:19:13.806 [2024-07-11 15:27:27.288249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.289248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.289291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.806 [2024-07-11 15:27:27.289313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:19:13.806 [2024-07-11 15:27:27.289325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.370657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.370749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:13.806 [2024-07-11 15:27:27.370801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.289 ms 00:19:13.806 [2024-07-11 15:27:27.370814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.384510] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:13.806 [2024-07-11 15:27:27.399700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.399792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.806 [2024-07-11 15:27:27.399844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.675 ms 00:19:13.806 [2024-07-11 15:27:27.399855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.400020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.400040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:13.806 [2024-07-11 15:27:27.400070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:13.806 [2024-07-11 15:27:27.400132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.400211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.400228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.806 [2024-07-11 15:27:27.400240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:13.806 [2024-07-11 15:27:27.400251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.400286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.400301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.806 [2024-07-11 15:27:27.400313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:13.806 [2024-07-11 15:27:27.400323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.806 [2024-07-11 15:27:27.400367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:13.806 [2024-07-11 15:27:27.400385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.806 [2024-07-11 15:27:27.400397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:13.806 [2024-07-11 15:27:27.400410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:13.806 [2024-07-11 15:27:27.400422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.065 [2024-07-11 15:27:27.435295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.065 [2024-07-11 15:27:27.435369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.065 [2024-07-11 15:27:27.435390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.831 ms 00:19:14.065 [2024-07-11 15:27:27.435412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.065 [2024-07-11 15:27:27.435623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.065 [2024-07-11 15:27:27.435645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.065 [2024-07-11 15:27:27.435659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:14.065 [2024-07-11 15:27:27.435671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
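One detail worth a sanity check in the layout dump above: the superblock rows give blk_offs/blk_sz in FTL blocks, and a 4 KiB block size reproduces the MiB figures; the block size itself is derivable from this log, since the 23592960 L2P entries at the stated 4-byte address size occupy exactly the 0x5a00-block l2p region. A quick arithmetic check in the shell:

echo $((0x5a00 * 4096 / 1024 / 1024))      # l2p region: 23040 blocks -> 90 MiB, as dumped
echo $((23592960 * 4 / 1024 / 1024))       # L2P table: entries x 4-byte address -> 90 MiB
echo $((0x1900000 * 4096 / 1024 / 1024))   # data_btm: 26214400 blocks -> 102400 MiB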
00:19:14.065 [2024-07-11 15:27:27.436851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.065 [2024-07-11 15:27:27.441837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.239 ms, result 0 00:19:14.065 [2024-07-11 15:27:27.442973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.065 [2024-07-11 15:27:27.461608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.147  Copying: 21/256 [MB] (21 MBps) Copying: 43/256 [MB] (21 MBps) Copying: 65/256 [MB] (21 MBps) Copying: 86/256 [MB] (21 MBps) Copying: 110/256 [MB] (23 MBps) Copying: 134/256 [MB] (23 MBps) Copying: 158/256 [MB] (24 MBps) Copying: 182/256 [MB] (24 MBps) Copying: 206/256 [MB] (23 MBps) Copying: 230/256 [MB] (23 MBps) Copying: 253/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-11 15:27:38.605682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:25.147 [2024-07-11 15:27:38.619147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.619252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:25.147 [2024-07-11 15:27:38.619316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:25.147 [2024-07-11 15:27:38.619328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.619362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:25.147 [2024-07-11 15:27:38.622941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.623006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:25.147 [2024-07-11 15:27:38.623051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.557 ms 00:19:25.147 [2024-07-11 15:27:38.623076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.624994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.625061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:25.147 [2024-07-11 15:27:38.625078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:19:25.147 [2024-07-11 15:27:38.625089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.632684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.632735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:25.147 [2024-07-11 15:27:38.632751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.569 ms 00:19:25.147 [2024-07-11 15:27:38.632763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.640911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.640957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:25.147 [2024-07-11 15:27:38.640985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.015 ms 00:19:25.147 [2024-07-11 15:27:38.641011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:25.147 [2024-07-11 15:27:38.674940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.675050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:25.147 [2024-07-11 15:27:38.675072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.815 ms 00:19:25.147 [2024-07-11 15:27:38.675084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.694540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.694607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:25.147 [2024-07-11 15:27:38.694628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.334 ms 00:19:25.147 [2024-07-11 15:27:38.694640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.694909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.694933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:25.147 [2024-07-11 15:27:38.694946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:19:25.147 [2024-07-11 15:27:38.694957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.147 [2024-07-11 15:27:38.729130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.147 [2024-07-11 15:27:38.729215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:25.147 [2024-07-11 15:27:38.729237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.146 ms 00:19:25.147 [2024-07-11 15:27:38.729249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.406 [2024-07-11 15:27:38.763004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.406 [2024-07-11 15:27:38.763097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:25.406 [2024-07-11 15:27:38.763117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.637 ms 00:19:25.406 [2024-07-11 15:27:38.763129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.406 [2024-07-11 15:27:38.796768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.406 [2024-07-11 15:27:38.796834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:25.406 [2024-07-11 15:27:38.796884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.520 ms 00:19:25.406 [2024-07-11 15:27:38.796895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.406 [2024-07-11 15:27:38.830316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.406 [2024-07-11 15:27:38.830387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:25.406 [2024-07-11 15:27:38.830408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.234 ms 00:19:25.406 [2024-07-11 15:27:38.830420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.406 [2024-07-11 15:27:38.830534] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:25.406 [2024-07-11 15:27:38.830561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.830992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831229] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:25.406 [2024-07-11 15:27:38.831274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831580] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:25.407 [2024-07-11 15:27:38.831885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:25.407 [2024-07-11 15:27:38.831905] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:19:25.407 [2024-07-11 15:27:38.831917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:25.407 [2024-07-11 15:27:38.831928] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:25.407 [2024-07-11 15:27:38.831939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:25.407 [2024-07-11 15:27:38.831965] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:25.407 [2024-07-11 15:27:38.831977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:25.407 [2024-07-11 15:27:38.831988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:25.407 [2024-07-11 15:27:38.831999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:25.407 [2024-07-11 15:27:38.832010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:25.407 [2024-07-11 15:27:38.832020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:25.407 [2024-07-11 15:27:38.832031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.407 [2024-07-11 15:27:38.832043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:25.407 [2024-07-11 15:27:38.832055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:19:25.407 [2024-07-11 15:27:38.832066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.849428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.407 [2024-07-11 15:27:38.849494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:25.407 [2024-07-11 15:27:38.849513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.310 ms 00:19:25.407 [2024-07-11 15:27:38.849525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.850096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.407 [2024-07-11 15:27:38.850117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:25.407 [2024-07-11 15:27:38.850131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:19:25.407 [2024-07-11 15:27:38.850151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.891683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.407 [2024-07-11 15:27:38.891752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.407 [2024-07-11 15:27:38.891787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.407 [2024-07-11 15:27:38.891813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.891938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.407 [2024-07-11 15:27:38.891953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:25.407 [2024-07-11 15:27:38.891964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.407 [2024-07-11 15:27:38.891982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.892082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.407 [2024-07-11 15:27:38.892137] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.407 [2024-07-11 15:27:38.892153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.407 [2024-07-11 15:27:38.892164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.892190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.407 [2024-07-11 15:27:38.892204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.407 [2024-07-11 15:27:38.892215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.407 [2024-07-11 15:27:38.892226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.407 [2024-07-11 15:27:38.995667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.407 [2024-07-11 15:27:38.995738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.407 [2024-07-11 15:27:38.995758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.407 [2024-07-11 15:27:38.995771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.086337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.666 [2024-07-11 15:27:39.086438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.666 [2024-07-11 15:27:39.086487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.086499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.086604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.666 [2024-07-11 15:27:39.086622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.666 [2024-07-11 15:27:39.086634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.086646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.086682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.666 [2024-07-11 15:27:39.086695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.666 [2024-07-11 15:27:39.086707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.086718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.086845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.666 [2024-07-11 15:27:39.086865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.666 [2024-07-11 15:27:39.086878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.086889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.086941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.666 [2024-07-11 15:27:39.086958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:25.666 [2024-07-11 15:27:39.086971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.086982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.087040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
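The statistics block dumped a few entries above (ftl_debug.c: 211-220) reports total writes: 960, user writes: 0 and WAF: inf. A minimal sketch of that arithmetic follows, assuming WAF is simply total media writes divided by user writes and that a zero denominator is printed as "inf", which is consistent with the dump; this is an illustration, not SPDK source code.

# Sketch only (not SPDK code): reproduce the WAF figure from the
# ftl_dev_dump_stats block above, assuming WAF = total writes / user writes.
def waf(total_writes: int, user_writes: int) -> str:
    # ftl0 above reports total writes: 960, user writes: 0 -> "inf"
    if user_writes == 0:
        return "inf"
    return f"{total_writes / user_writes:.4f}"

print(waf(960, 0))    # -> inf, matching the dump above
print(waf(960, 240))  # -> 4.0000 (hypothetical non-zero user write count)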
00:19:25.666 [2024-07-11 15:27:39.087118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.666 [2024-07-11 15:27:39.087131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.087142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.087199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.666 [2024-07-11 15:27:39.087215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.666 [2024-07-11 15:27:39.087226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.666 [2024-07-11 15:27:39.087238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.666 [2024-07-11 15:27:39.087432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 468.301 ms, result 0 00:19:26.600 00:19:26.600 00:19:26.600 15:27:40 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=80342 00:19:26.600 15:27:40 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:26.600 15:27:40 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 80342 00:19:26.600 15:27:40 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80342 ']' 00:19:26.600 15:27:40 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.600 15:27:40 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.600 15:27:40 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.600 15:27:40 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.600 15:27:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:26.858 [2024-07-11 15:27:40.336224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
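At this point the harness has relaunched spdk_tgt and is waiting for it to listen on the UNIX domain socket /var/tmp/spdk.sock; the load_config and bdev_ftl_unmap calls that follow are all issued through scripts/rpc.py against that socket. The sketch below shows the same request sent over the socket with only the standard library, as a hedged illustration of the JSON-RPC 2.0 protocol in play: the method name bdev_ftl_unmap is taken verbatim from the rpc.py invocations later in this log, but the parameter names (name, lba, num_blocks) are inferred from the CLI flags (-b ftl0 --lba 0 --num_blocks 1024) and should be checked against the SPDK JSON-RPC documentation.

# Stdlib-only sketch (not scripts/rpc.py): send one JSON-RPC 2.0 request to
# the spdk_tgt socket the harness waits on. Parameter names are assumptions
# inferred from the CLI flags shown further down in this log.
import json
import socket

def spdk_rpc(method: str, params: dict, sock_path: str = "/var/tmp/spdk.sock") -> dict:
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf)  # stop once a complete JSON object is in
            except json.JSONDecodeError:
                continue  # response still partial, keep reading

# Mirrors the first unmap call made by ftl/trim.sh later in this log.
print(spdk_rpc("bdev_ftl_unmap", {"name": "ftl0", "lba": 0, "num_blocks": 1024}))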
00:19:26.858 [2024-07-11 15:27:40.336407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80342 ] 00:19:27.117 [2024-07-11 15:27:40.505899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.117 [2024-07-11 15:27:40.697286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.053 15:27:41 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.053 15:27:41 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:28.053 15:27:41 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:28.312 [2024-07-11 15:27:41.698299] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.312 [2024-07-11 15:27:41.698390] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:28.312 [2024-07-11 15:27:41.866604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.866675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:28.312 [2024-07-11 15:27:41.866713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.312 [2024-07-11 15:27:41.866729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.870152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.870199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.312 [2024-07-11 15:27:41.870217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:19:28.312 [2024-07-11 15:27:41.870233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.870374] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:28.312 [2024-07-11 15:27:41.871341] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:28.312 [2024-07-11 15:27:41.871381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.871416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.312 [2024-07-11 15:27:41.871430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:19:28.312 [2024-07-11 15:27:41.871444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.872706] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:28.312 [2024-07-11 15:27:41.889381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.889451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:28.312 [2024-07-11 15:27:41.889491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.680 ms 00:19:28.312 [2024-07-11 15:27:41.889506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.889699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.889721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:28.312 [2024-07-11 15:27:41.889738] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:28.312 [2024-07-11 15:27:41.889751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.894475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.894547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.312 [2024-07-11 15:27:41.894589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:19:28.312 [2024-07-11 15:27:41.894602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.894763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.894784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.312 [2024-07-11 15:27:41.894799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:28.312 [2024-07-11 15:27:41.894811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.894887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.894902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.312 [2024-07-11 15:27:41.894916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:28.312 [2024-07-11 15:27:41.894943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.895012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:28.312 [2024-07-11 15:27:41.899390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.899432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.312 [2024-07-11 15:27:41.899448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.391 ms 00:19:28.312 [2024-07-11 15:27:41.899463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.899533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.899557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.312 [2024-07-11 15:27:41.899571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:28.312 [2024-07-11 15:27:41.899589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.899658] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:28.312 [2024-07-11 15:27:41.899688] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:28.312 [2024-07-11 15:27:41.899738] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:28.312 [2024-07-11 15:27:41.899765] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:28.312 [2024-07-11 15:27:41.899880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.312 [2024-07-11 15:27:41.899900] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.312 [2024-07-11 15:27:41.899918] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:28.312 [2024-07-11 15:27:41.899935] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.312 [2024-07-11 15:27:41.899949] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.312 [2024-07-11 15:27:41.899964] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:28.312 [2024-07-11 15:27:41.899976] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.312 [2024-07-11 15:27:41.899988] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.312 [2024-07-11 15:27:41.899999] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.312 [2024-07-11 15:27:41.900016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.900028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.312 [2024-07-11 15:27:41.900042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:19:28.312 [2024-07-11 15:27:41.900096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.312 [2024-07-11 15:27:41.900197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.312 [2024-07-11 15:27:41.900213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.312 [2024-07-11 15:27:41.900227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:28.312 [2024-07-11 15:27:41.900239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.313 [2024-07-11 15:27:41.900389] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.313 [2024-07-11 15:27:41.900409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.313 [2024-07-11 15:27:41.900425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.313 [2024-07-11 15:27:41.900465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.313 [2024-07-11 15:27:41.900508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.313 [2024-07-11 15:27:41.900535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.313 [2024-07-11 15:27:41.900547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:28.313 [2024-07-11 15:27:41.900561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.313 [2024-07-11 15:27:41.900573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.313 [2024-07-11 15:27:41.900586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:28.313 [2024-07-11 15:27:41.900598] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.313 
[2024-07-11 15:27:41.900612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.313 [2024-07-11 15:27:41.900623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900649] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.313 [2024-07-11 15:27:41.900663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.313 [2024-07-11 15:27:41.900702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.313 [2024-07-11 15:27:41.900743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:28.313 [2024-07-11 15:27:41.900793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.313 [2024-07-11 15:27:41.900820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.313 [2024-07-11 15:27:41.900833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.313 [2024-07-11 15:27:41.900859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:28.313 [2024-07-11 15:27:41.900871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:28.313 [2024-07-11 15:27:41.900884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.313 [2024-07-11 15:27:41.900896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.313 [2024-07-11 15:27:41.900925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:28.313 [2024-07-11 15:27:41.900936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.313 [2024-07-11 15:27:41.900963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:28.313 [2024-07-11 15:27:41.900976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.313 [2024-07-11 15:27:41.900987] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.313 [2024-07-11 15:27:41.901005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.313 [2024-07-11 15:27:41.901016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.313 [2024-07-11 15:27:41.901030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.313 [2024-07-11 15:27:41.901043] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:28.313 [2024-07-11 15:27:41.901067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.313 [2024-07-11 15:27:41.901080] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.313 [2024-07-11 15:27:41.901109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.313 [2024-07-11 15:27:41.901123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.313 [2024-07-11 15:27:41.901138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.313 [2024-07-11 15:27:41.901151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.313 [2024-07-11 15:27:41.901170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:28.313 [2024-07-11 15:27:41.901204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:28.313 [2024-07-11 15:27:41.901216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:28.313 [2024-07-11 15:27:41.901231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:28.313 [2024-07-11 15:27:41.901243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:28.313 [2024-07-11 15:27:41.901257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:28.313 [2024-07-11 15:27:41.901269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:28.313 [2024-07-11 15:27:41.901283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:28.313 [2024-07-11 15:27:41.901295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:28.313 [2024-07-11 15:27:41.901309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:28.313 [2024-07-11 15:27:41.901376] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.313 [2024-07-11 
15:27:41.901392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:28.313 [2024-07-11 15:27:41.901422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.313 [2024-07-11 15:27:41.901435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.313 [2024-07-11 15:27:41.901449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.313 [2024-07-11 15:27:41.901463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.313 [2024-07-11 15:27:41.901477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.313 [2024-07-11 15:27:41.901490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.174 ms 00:19:28.313 [2024-07-11 15:27:41.901504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.934633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.934719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.573 [2024-07-11 15:27:41.934741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.045 ms 00:19:28.573 [2024-07-11 15:27:41.934761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.934956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.934980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:28.573 [2024-07-11 15:27:41.934994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:28.573 [2024-07-11 15:27:41.935009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.973580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.973688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.573 [2024-07-11 15:27:41.973711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.521 ms 00:19:28.573 [2024-07-11 15:27:41.973726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.973854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.973891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.573 [2024-07-11 15:27:41.973917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.573 [2024-07-11 15:27:41.973946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.974319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.974342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.573 [2024-07-11 15:27:41.974377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:19:28.573 [2024-07-11 15:27:41.974392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.974546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.974574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.573 [2024-07-11 15:27:41.974589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:28.573 [2024-07-11 15:27:41.974603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:41.992461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:41.992548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.573 [2024-07-11 15:27:41.992570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.812 ms 00:19:28.573 [2024-07-11 15:27:41.992585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:42.009344] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:28.573 [2024-07-11 15:27:42.009456] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:28.573 [2024-07-11 15:27:42.009494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:42.009526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:28.573 [2024-07-11 15:27:42.009542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.731 ms 00:19:28.573 [2024-07-11 15:27:42.009556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:42.039770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:42.039899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:28.573 [2024-07-11 15:27:42.039923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.068 ms 00:19:28.573 [2024-07-11 15:27:42.039955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:42.057040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:42.057156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:28.573 [2024-07-11 15:27:42.057192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.878 ms 00:19:28.573 [2024-07-11 15:27:42.057211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:42.074183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:42.074262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:28.573 [2024-07-11 15:27:42.074285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.821 ms 00:19:28.573 [2024-07-11 15:27:42.074300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:42.075332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:42.075371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:28.573 [2024-07-11 15:27:42.075387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:19:28.573 [2024-07-11 15:27:42.075403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 
15:27:42.163784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.573 [2024-07-11 15:27:42.163901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:28.573 [2024-07-11 15:27:42.163940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.346 ms 00:19:28.573 [2024-07-11 15:27:42.163972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.573 [2024-07-11 15:27:42.178025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:28.831 [2024-07-11 15:27:42.193090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.193163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:28.831 [2024-07-11 15:27:42.193195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.906 ms 00:19:28.831 [2024-07-11 15:27:42.193209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.193355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.193376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:28.831 [2024-07-11 15:27:42.193393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:28.831 [2024-07-11 15:27:42.193406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.193479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.193496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:28.831 [2024-07-11 15:27:42.193512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:28.831 [2024-07-11 15:27:42.193529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.193567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.193582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:28.831 [2024-07-11 15:27:42.193598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:28.831 [2024-07-11 15:27:42.193610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.193651] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:28.831 [2024-07-11 15:27:42.193673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.193690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:28.831 [2024-07-11 15:27:42.193705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:28.831 [2024-07-11 15:27:42.193720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.227476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.227585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:28.831 [2024-07-11 15:27:42.227623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.719 ms 00:19:28.831 [2024-07-11 15:27:42.227638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.227852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.831 [2024-07-11 15:27:42.227877] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:28.831 [2024-07-11 15:27:42.227891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:28.831 [2024-07-11 15:27:42.227923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.831 [2024-07-11 15:27:42.229233] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.831 [2024-07-11 15:27:42.233894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.199 ms, result 0 00:19:28.831 [2024-07-11 15:27:42.235359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.831 Some configs were skipped because the RPC state that can call them passed over. 00:19:28.831 15:27:42 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:29.090 [2024-07-11 15:27:42.557963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.090 [2024-07-11 15:27:42.558057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:29.090 [2024-07-11 15:27:42.558090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:19:29.090 [2024-07-11 15:27:42.558104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.090 [2024-07-11 15:27:42.558156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.822 ms, result 0 00:19:29.090 true 00:19:29.090 15:27:42 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:29.349 [2024-07-11 15:27:42.838032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.349 [2024-07-11 15:27:42.838106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:29.349 [2024-07-11 15:27:42.838144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:19:29.349 [2024-07-11 15:27:42.838162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.349 [2024-07-11 15:27:42.838221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.494 ms, result 0 00:19:29.349 true 00:19:29.349 15:27:42 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 80342 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80342 ']' 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80342 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80342 00:19:29.349 killing process with pid 80342 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80342' 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80342 00:19:29.349 15:27:42 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80342 00:19:30.284 [2024-07-11 15:27:43.855576] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-07-11 15:27:43.855692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.284 [2024-07-11 15:27:43.855732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.284 [2024-07-11 15:27:43.855745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-07-11 15:27:43.855782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:30.284 [2024-07-11 15:27:43.859375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-07-11 15:27:43.859430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.284 [2024-07-11 15:27:43.859478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:19:30.284 [2024-07-11 15:27:43.859494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-07-11 15:27:43.859837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-07-11 15:27:43.859861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.284 [2024-07-11 15:27:43.859876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:30.284 [2024-07-11 15:27:43.859890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-07-11 15:27:43.864177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-07-11 15:27:43.864239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.284 [2024-07-11 15:27:43.864260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:19:30.284 [2024-07-11 15:27:43.864275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-07-11 15:27:43.872286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.284 [2024-07-11 15:27:43.872348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.284 [2024-07-11 15:27:43.872395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.952 ms 00:19:30.284 [2024-07-11 15:27:43.872412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.284 [2024-07-11 15:27:43.885490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-11 15:27:43.885567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.285 [2024-07-11 15:27:43.885588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.992 ms 00:19:30.285 [2024-07-11 15:27:43.885607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-11 15:27:43.894928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-11 15:27:43.895015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.285 [2024-07-11 15:27:43.895067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.239 ms 00:19:30.285 [2024-07-11 15:27:43.895084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.285 [2024-07-11 15:27:43.895267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.285 [2024-07-11 15:27:43.895292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.285 [2024-07-11 15:27:43.895307] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:30.285 [2024-07-11 15:27:43.895336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-11 15:27:43.908840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-07-11 15:27:43.908946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:30.545 [2024-07-11 15:27:43.908966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.473 ms 00:19:30.545 [2024-07-11 15:27:43.908991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-11 15:27:43.922554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-07-11 15:27:43.922636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:30.545 [2024-07-11 15:27:43.922655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.480 ms 00:19:30.545 [2024-07-11 15:27:43.922677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-11 15:27:43.935487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-07-11 15:27:43.935570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:30.545 [2024-07-11 15:27:43.935588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:19:30.545 [2024-07-11 15:27:43.935602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-11 15:27:43.948433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.545 [2024-07-11 15:27:43.948501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:30.545 [2024-07-11 15:27:43.948519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.753 ms 00:19:30.545 [2024-07-11 15:27:43.948533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.545 [2024-07-11 15:27:43.948595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:30.545 [2024-07-11 15:27:43.948649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 
15:27:43.948791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.948999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:30.545 [2024-07-11 15:27:43.949189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:30.545 [2024-07-11 15:27:43.949845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.949979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:30.546 [2024-07-11 15:27:43.950192] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:30.546 [2024-07-11 15:27:43.950208] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:19:30.546 [2024-07-11 15:27:43.950225] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:30.546 [2024-07-11 15:27:43.950237] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:30.546 [2024-07-11 15:27:43.950250] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:30.546 [2024-07-11 15:27:43.950263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:30.546 [2024-07-11 15:27:43.950276] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:30.546 [2024-07-11 15:27:43.950289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:30.546 [2024-07-11 15:27:43.950303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:30.546 [2024-07-11 15:27:43.950314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:30.546 [2024-07-11 15:27:43.950343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:30.546 [2024-07-11 15:27:43.950355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:30.546 [2024-07-11 15:27:43.950370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:30.546 [2024-07-11 15:27:43.950383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:19:30.546 [2024-07-11 15:27:43.950397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:43.967588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-07-11 15:27:43.967677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:30.546 [2024-07-11 15:27:43.967698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.142 ms 00:19:30.546 [2024-07-11 15:27:43.967716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:43.968321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.546 [2024-07-11 15:27:43.968352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:30.546 [2024-07-11 15:27:43.968397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:19:30.546 [2024-07-11 15:27:43.968427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:44.026555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.546 [2024-07-11 15:27:44.026646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.546 [2024-07-11 15:27:44.026666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.546 [2024-07-11 15:27:44.026681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:44.026835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.546 [2024-07-11 15:27:44.026857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.546 [2024-07-11 15:27:44.026873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.546 [2024-07-11 15:27:44.026903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:44.026993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.546 [2024-07-11 15:27:44.027017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.546 [2024-07-11 15:27:44.027030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.546 [2024-07-11 15:27:44.027047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:44.027073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.546 [2024-07-11 15:27:44.027116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.546 [2024-07-11 15:27:44.027130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.546 [2024-07-11 15:27:44.027147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.546 [2024-07-11 15:27:44.132379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.546 [2024-07-11 15:27:44.132501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.546 [2024-07-11 15:27:44.132552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.546 [2024-07-11 15:27:44.132567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 
15:27:44.221176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.221246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.806 [2024-07-11 15:27:44.221267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.221287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.221395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.221417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.806 [2024-07-11 15:27:44.221431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.221449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.221502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.221520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.806 [2024-07-11 15:27:44.221532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.221546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.221680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.221704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.806 [2024-07-11 15:27:44.221718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.221732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.221783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.221805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:30.806 [2024-07-11 15:27:44.221833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.221848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.221900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.221919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.806 [2024-07-11 15:27:44.221932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.221948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.222032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.806 [2024-07-11 15:27:44.222080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.806 [2024-07-11 15:27:44.222094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.806 [2024-07-11 15:27:44.222108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.806 [2024-07-11 15:27:44.222273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.680 ms, result 0 00:19:31.745 15:27:45 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:31.745 15:27:45 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.745 [2024-07-11 15:27:45.277133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:31.745 [2024-07-11 15:27:45.277298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80400 ] 00:19:32.004 [2024-07-11 15:27:45.450561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.263 [2024-07-11 15:27:45.633519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.524 [2024-07-11 15:27:45.941046] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.524 [2024-07-11 15:27:45.941177] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.524 [2024-07-11 15:27:46.100344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.100416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.524 [2024-07-11 15:27:46.100452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:32.524 [2024-07-11 15:27:46.100463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.103645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.103686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.524 [2024-07-11 15:27:46.103718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:19:32.524 [2024-07-11 15:27:46.103728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.103867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.524 [2024-07-11 15:27:46.104895] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.524 [2024-07-11 15:27:46.104935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.104965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.524 [2024-07-11 15:27:46.104977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:19:32.524 [2024-07-11 15:27:46.104987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.106262] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.524 [2024-07-11 15:27:46.121142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.121181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.524 [2024-07-11 15:27:46.121219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.882 ms 00:19:32.524 [2024-07-11 15:27:46.121230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.121340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.121361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.524 [2024-07-11 15:27:46.121373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:19:32.524 [2024-07-11 15:27:46.121383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.125710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.125756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.524 [2024-07-11 15:27:46.125802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.272 ms 00:19:32.524 [2024-07-11 15:27:46.125813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.125933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.125954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.524 [2024-07-11 15:27:46.125967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:32.524 [2024-07-11 15:27:46.125977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.524 [2024-07-11 15:27:46.126065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.524 [2024-07-11 15:27:46.126085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.524 [2024-07-11 15:27:46.126099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:32.524 [2024-07-11 15:27:46.126114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.525 [2024-07-11 15:27:46.126153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:32.525 [2024-07-11 15:27:46.130373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.525 [2024-07-11 15:27:46.130430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.525 [2024-07-11 15:27:46.130462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:19:32.525 [2024-07-11 15:27:46.130472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.525 [2024-07-11 15:27:46.130563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.525 [2024-07-11 15:27:46.130582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.525 [2024-07-11 15:27:46.130594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:32.525 [2024-07-11 15:27:46.130605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.525 [2024-07-11 15:27:46.130637] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.525 [2024-07-11 15:27:46.130667] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.525 [2024-07-11 15:27:46.130714] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.525 [2024-07-11 15:27:46.130736] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:32.525 [2024-07-11 15:27:46.130840] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.525 [2024-07-11 15:27:46.130855] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.525 [2024-07-11 15:27:46.130869] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:32.525 [2024-07-11 15:27:46.130884] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.525 [2024-07-11 15:27:46.130897] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.525 [2024-07-11 15:27:46.130909] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:32.525 [2024-07-11 15:27:46.130924] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:32.525 [2024-07-11 15:27:46.130935] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.525 [2024-07-11 15:27:46.130946] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.525 [2024-07-11 15:27:46.130989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.525 [2024-07-11 15:27:46.131000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.525 [2024-07-11 15:27:46.131012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:19:32.525 [2024-07-11 15:27:46.131023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.525 [2024-07-11 15:27:46.131131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.525 [2024-07-11 15:27:46.131150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.525 [2024-07-11 15:27:46.131162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:32.525 [2024-07-11 15:27:46.131178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.525 [2024-07-11 15:27:46.131289] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.525 [2024-07-11 15:27:46.131308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.525 [2024-07-11 15:27:46.131336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:32.525 [2024-07-11 15:27:46.131370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.525 [2024-07-11 15:27:46.131403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.525 [2024-07-11 15:27:46.131434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.525 [2024-07-11 15:27:46.131444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:32.525 [2024-07-11 15:27:46.131465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.525 [2024-07-11 15:27:46.131475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.525 [2024-07-11 15:27:46.131486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:32.525 [2024-07-11 15:27:46.131495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131506] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:32.525 [2024-07-11 15:27:46.131517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.525 [2024-07-11 15:27:46.131565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:32.525 [2024-07-11 15:27:46.131596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.525 [2024-07-11 15:27:46.131626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131636] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131646] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.525 [2024-07-11 15:27:46.131656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.525 [2024-07-11 15:27:46.131686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.525 [2024-07-11 15:27:46.131706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.525 [2024-07-11 15:27:46.131717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:32.525 [2024-07-11 15:27:46.131727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.525 [2024-07-11 15:27:46.131737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.525 [2024-07-11 15:27:46.131747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:32.525 [2024-07-11 15:27:46.131757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.525 [2024-07-11 15:27:46.131792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:32.525 [2024-07-11 15:27:46.131818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131828] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.525 [2024-07-11 15:27:46.131839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.525 [2024-07-11 15:27:46.131849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.525 [2024-07-11 15:27:46.131870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:32.525 
[2024-07-11 15:27:46.131880] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.525 [2024-07-11 15:27:46.131889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.525 [2024-07-11 15:27:46.131899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.525 [2024-07-11 15:27:46.131909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.525 [2024-07-11 15:27:46.131919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:32.525 [2024-07-11 15:27:46.131930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.525 [2024-07-11 15:27:46.131948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.131960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:32.525 [2024-07-11 15:27:46.131971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:32.525 [2024-07-11 15:27:46.131982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:32.525 [2024-07-11 15:27:46.131992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:32.525 [2024-07-11 15:27:46.132003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:32.525 [2024-07-11 15:27:46.132013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:32.525 [2024-07-11 15:27:46.132023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:32.525 [2024-07-11 15:27:46.132034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:32.525 [2024-07-11 15:27:46.132044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:32.525 [2024-07-11 15:27:46.132055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.132065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.132075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.132100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.132113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:32.525 [2024-07-11 15:27:46.132124] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.525 [2024-07-11 15:27:46.132135] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.132148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:32.525 [2024-07-11 15:27:46.132159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.525 [2024-07-11 15:27:46.132170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.525 [2024-07-11 15:27:46.132180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.525 [2024-07-11 15:27:46.132192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.525 [2024-07-11 15:27:46.132204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.526 [2024-07-11 15:27:46.132215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:19:32.526 [2024-07-11 15:27:46.132226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.785 [2024-07-11 15:27:46.178102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.785 [2024-07-11 15:27:46.178351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.785 [2024-07-11 15:27:46.178502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.743 ms 00:19:32.785 [2024-07-11 15:27:46.178552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.785 [2024-07-11 15:27:46.178847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.785 [2024-07-11 15:27:46.178987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.785 [2024-07-11 15:27:46.179142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:32.785 [2024-07-11 15:27:46.179201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.785 [2024-07-11 15:27:46.215246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.785 [2024-07-11 15:27:46.215453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.785 [2024-07-11 15:27:46.215568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.918 ms 00:19:32.785 [2024-07-11 15:27:46.215618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.215774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.215859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.786 [2024-07-11 15:27:46.215944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:32.786 [2024-07-11 15:27:46.215983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.216343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.216407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.786 [2024-07-11 15:27:46.216575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:32.786 [2024-07-11 15:27:46.216629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 
15:27:46.216816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.216872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.786 [2024-07-11 15:27:46.217001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:32.786 [2024-07-11 15:27:46.217146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.233153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.233457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.786 [2024-07-11 15:27:46.233572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.926 ms 00:19:32.786 [2024-07-11 15:27:46.233621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.249240] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:32.786 [2024-07-11 15:27:46.249446] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.786 [2024-07-11 15:27:46.249605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.249649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.786 [2024-07-11 15:27:46.249689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.761 ms 00:19:32.786 [2024-07-11 15:27:46.249790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.279299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.279577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.786 [2024-07-11 15:27:46.279608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.360 ms 00:19:32.786 [2024-07-11 15:27:46.279622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.295890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.295937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.786 [2024-07-11 15:27:46.295971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.144 ms 00:19:32.786 [2024-07-11 15:27:46.295981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.311005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.311066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.786 [2024-07-11 15:27:46.311099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.874 ms 00:19:32.786 [2024-07-11 15:27:46.311109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.311999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.312046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.786 [2024-07-11 15:27:46.312062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:19:32.786 [2024-07-11 15:27:46.312072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.376524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:32.786 [2024-07-11 15:27:46.376625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.786 [2024-07-11 15:27:46.376662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.417 ms 00:19:32.786 [2024-07-11 15:27:46.376673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.786 [2024-07-11 15:27:46.388357] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:33.045 [2024-07-11 15:27:46.401627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.401690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.045 [2024-07-11 15:27:46.401725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.787 ms 00:19:33.045 [2024-07-11 15:27:46.401736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.401865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.401900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:33.045 [2024-07-11 15:27:46.401916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:33.045 [2024-07-11 15:27:46.401927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.402034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.402075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.045 [2024-07-11 15:27:46.402088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:33.045 [2024-07-11 15:27:46.402099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.402136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.402152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:33.045 [2024-07-11 15:27:46.402165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:33.045 [2024-07-11 15:27:46.402181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.402220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:33.045 [2024-07-11 15:27:46.402236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.402249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:33.045 [2024-07-11 15:27:46.402261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:33.045 [2024-07-11 15:27:46.402271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.430820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.430889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:33.045 [2024-07-11 15:27:46.430936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.516 ms 00:19:33.045 [2024-07-11 15:27:46.430947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.431181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.045 [2024-07-11 15:27:46.431202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:33.045 [2024-07-11 15:27:46.431215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:33.045 [2024-07-11 15:27:46.431227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.045 [2024-07-11 15:27:46.432360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.045 [2024-07-11 15:27:46.436278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.641 ms, result 0 00:19:33.045 [2024-07-11 15:27:46.437218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:33.045 [2024-07-11 15:27:46.453120] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:43.481  Copying: 26/256 [MB] (26 MBps) Copying: 50/256 [MB] (23 MBps) Copying: 73/256 [MB] (23 MBps) Copying: 96/256 [MB] (22 MBps) Copying: 120/256 [MB] (24 MBps) Copying: 144/256 [MB] (23 MBps) Copying: 168/256 [MB] (24 MBps) Copying: 194/256 [MB] (25 MBps) Copying: 219/256 [MB] (25 MBps) Copying: 244/256 [MB] (24 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-11 15:27:56.941523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:43.481 [2024-07-11 15:27:56.952887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:56.952929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:43.481 [2024-07-11 15:27:56.952948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:43.481 [2024-07-11 15:27:56.952958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:56.952986] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:43.481 [2024-07-11 15:27:56.956200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:56.956236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:43.481 [2024-07-11 15:27:56.956250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:19:43.481 [2024-07-11 15:27:56.956260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:56.956518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:56.956534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:43.481 [2024-07-11 15:27:56.956545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:19:43.481 [2024-07-11 15:27:56.956556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:56.960207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:56.960234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:43.481 [2024-07-11 15:27:56.960246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:19:43.481 [2024-07-11 15:27:56.960262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:56.967160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:56.967189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:43.481 
[2024-07-11 15:27:56.967201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:19:43.481 [2024-07-11 15:27:56.967210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:56.995997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:56.996045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:43.481 [2024-07-11 15:27:56.996078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.723 ms 00:19:43.481 [2024-07-11 15:27:56.996089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:57.013026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:57.013074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:43.481 [2024-07-11 15:27:57.013107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.892 ms 00:19:43.481 [2024-07-11 15:27:57.013118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:57.013275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:57.013293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:43.481 [2024-07-11 15:27:57.013305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:19:43.481 [2024-07-11 15:27:57.013316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.481 [2024-07-11 15:27:57.042718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.481 [2024-07-11 15:27:57.042756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:43.482 [2024-07-11 15:27:57.042771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.382 ms 00:19:43.482 [2024-07-11 15:27:57.042781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.482 [2024-07-11 15:27:57.071379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.482 [2024-07-11 15:27:57.071417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:43.482 [2024-07-11 15:27:57.071431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.545 ms 00:19:43.482 [2024-07-11 15:27:57.071441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.742 [2024-07-11 15:27:57.100643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.742 [2024-07-11 15:27:57.100681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:43.742 [2024-07-11 15:27:57.100696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.127 ms 00:19:43.742 [2024-07-11 15:27:57.100706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.742 [2024-07-11 15:27:57.129184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.742 [2024-07-11 15:27:57.129220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:43.742 [2024-07-11 15:27:57.129234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.406 ms 00:19:43.742 [2024-07-11 15:27:57.129243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.742 [2024-07-11 15:27:57.129285] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:43.742 [2024-07-11 15:27:57.129307] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 
15:27:57.129584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:43.742 [2024-07-11 15:27:57.129771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:19:43.743 [2024-07-11 15:27:57.129856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.129985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:43.743 [2024-07-11 15:27:57.130527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:43.743 [2024-07-11 15:27:57.130538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:19:43.743 [2024-07-11 15:27:57.130549] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:43.743 [2024-07-11 15:27:57.130559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:43.743 [2024-07-11 15:27:57.130598] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:43.743 [2024-07-11 15:27:57.130609] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:43.743 [2024-07-11 15:27:57.130620] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:43.743 [2024-07-11 15:27:57.130631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:43.743 [2024-07-11 15:27:57.130641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:43.743 [2024-07-11 15:27:57.130651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:43.743 [2024-07-11 15:27:57.130661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:43.743 [2024-07-11 15:27:57.130672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.743 [2024-07-11 15:27:57.130683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:43.743 [2024-07-11 15:27:57.130695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:19:43.743 [2024-07-11 15:27:57.130711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.147061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.743 [2024-07-11 15:27:57.147139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:43.743 [2024-07-11 15:27:57.147158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.324 ms 00:19:43.743 [2024-07-11 15:27:57.147168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.147673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.743 [2024-07-11 15:27:57.147705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:43.743 [2024-07-11 15:27:57.147726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:19:43.743 [2024-07-11 15:27:57.147738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.188237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.743 [2024-07-11 15:27:57.188284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.743 [2024-07-11 15:27:57.188301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.743 [2024-07-11 15:27:57.188311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.188409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.743 [2024-07-11 15:27:57.188425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.743 [2024-07-11 15:27:57.188443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.743 [2024-07-11 15:27:57.188469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.188529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.743 [2024-07-11 15:27:57.188547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.743 [2024-07-11 15:27:57.188574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.743 [2024-07-11 15:27:57.188585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.188608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.743 [2024-07-11 15:27:57.188621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.743 [2024-07-11 15:27:57.188632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.743 [2024-07-11 15:27:57.188648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.743 [2024-07-11 15:27:57.281684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.743 [2024-07-11 15:27:57.281745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.743 [2024-07-11 15:27:57.281763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.743 [2024-07-11 15:27:57.281774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.363380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.363440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.001 [2024-07-11 15:27:57.363490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 [2024-07-11 15:27:57.363508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.363591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.363608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.001 [2024-07-11 15:27:57.363620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 [2024-07-11 15:27:57.363631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.363665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.363679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.001 [2024-07-11 15:27:57.363690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 [2024-07-11 15:27:57.363702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.363889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.363907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.001 [2024-07-11 15:27:57.363918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 [2024-07-11 15:27:57.363928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.363973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.363989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:44.001 [2024-07-11 15:27:57.364000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 
[2024-07-11 15:27:57.364010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.364058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.364073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.001 [2024-07-11 15:27:57.364084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 [2024-07-11 15:27:57.364093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.364394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:44.001 [2024-07-11 15:27:57.364459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.001 [2024-07-11 15:27:57.364511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:44.001 [2024-07-11 15:27:57.364549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.001 [2024-07-11 15:27:57.364905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.011 ms, result 0 00:19:44.935 00:19:44.935 00:19:44.935 15:27:58 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:44.935 15:27:58 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:45.502 15:27:58 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:45.502 [2024-07-11 15:27:58.992208] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:45.502 [2024-07-11 15:27:58.992339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80545 ] 00:19:45.760 [2024-07-11 15:27:59.153370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.760 [2024-07-11 15:27:59.342900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.328 [2024-07-11 15:27:59.643497] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.328 [2024-07-11 15:27:59.643574] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:46.328 [2024-07-11 15:27:59.804871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.804928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:46.328 [2024-07-11 15:27:59.804947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:46.328 [2024-07-11 15:27:59.804958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.808257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.808301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:46.328 [2024-07-11 15:27:59.808334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:19:46.328 [2024-07-11 15:27:59.808347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.808535] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:46.328 [2024-07-11 15:27:59.809593] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:46.328 [2024-07-11 15:27:59.809636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.809651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:46.328 [2024-07-11 15:27:59.809664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:19:46.328 [2024-07-11 15:27:59.809675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.810979] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:46.328 [2024-07-11 15:27:59.826209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.826250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:46.328 [2024-07-11 15:27:59.826272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.230 ms 00:19:46.328 [2024-07-11 15:27:59.826284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.826434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.826465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:46.328 [2024-07-11 15:27:59.826477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:46.328 [2024-07-11 15:27:59.826489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.830570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:46.328 [2024-07-11 15:27:59.830611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.328 [2024-07-11 15:27:59.830626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.029 ms 00:19:46.328 [2024-07-11 15:27:59.830637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.830742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.830760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.328 [2024-07-11 15:27:59.830772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:46.328 [2024-07-11 15:27:59.830782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.830821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.830835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.328 [2024-07-11 15:27:59.830846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:46.328 [2024-07-11 15:27:59.830859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.830886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:46.328 [2024-07-11 15:27:59.834944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.834978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.328 [2024-07-11 15:27:59.834992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:19:46.328 [2024-07-11 15:27:59.835002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.835071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.328 [2024-07-11 15:27:59.835089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.328 [2024-07-11 15:27:59.835101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:46.328 [2024-07-11 15:27:59.835110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.328 [2024-07-11 15:27:59.835134] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:46.328 [2024-07-11 15:27:59.835159] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:46.328 [2024-07-11 15:27:59.835199] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:46.328 [2024-07-11 15:27:59.835218] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:46.328 [2024-07-11 15:27:59.835307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.329 [2024-07-11 15:27:59.835321] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.329 [2024-07-11 15:27:59.835333] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:46.329 [2024-07-11 15:27:59.835346] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835358] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835368] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:46.329 [2024-07-11 15:27:59.835381] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.329 [2024-07-11 15:27:59.835390] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.329 [2024-07-11 15:27:59.835399] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.329 [2024-07-11 15:27:59.835409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.329 [2024-07-11 15:27:59.835419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.329 [2024-07-11 15:27:59.835429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:46.329 [2024-07-11 15:27:59.835439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.329 [2024-07-11 15:27:59.835541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.329 [2024-07-11 15:27:59.835555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.329 [2024-07-11 15:27:59.835566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:46.329 [2024-07-11 15:27:59.835581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.329 [2024-07-11 15:27:59.835677] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.329 [2024-07-11 15:27:59.835692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.329 [2024-07-11 15:27:59.835703] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.329 [2024-07-11 15:27:59.835733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.329 [2024-07-11 15:27:59.835762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.329 [2024-07-11 15:27:59.835780] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.329 [2024-07-11 15:27:59.835789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:46.329 [2024-07-11 15:27:59.835799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.329 [2024-07-11 15:27:59.835808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.329 [2024-07-11 15:27:59.835832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:46.329 [2024-07-11 15:27:59.835841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.329 [2024-07-11 15:27:59.835858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835879] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.329 [2024-07-11 15:27:59.835898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.329 [2024-07-11 15:27:59.835925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.329 [2024-07-11 15:27:59.835951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.329 [2024-07-11 15:27:59.835978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:46.329 [2024-07-11 15:27:59.835986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.329 [2024-07-11 15:27:59.835995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.329 [2024-07-11 15:27:59.836004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:46.329 [2024-07-11 15:27:59.836012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.329 [2024-07-11 15:27:59.836021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.329 [2024-07-11 15:27:59.836030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:46.329 [2024-07-11 15:27:59.836039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.329 [2024-07-11 15:27:59.836371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.329 [2024-07-11 15:27:59.836429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:46.329 [2024-07-11 15:27:59.836486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.836525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.329 [2024-07-11 15:27:59.836682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:46.329 [2024-07-11 15:27:59.836746] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.836794] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.329 [2024-07-11 15:27:59.836956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.329 [2024-07-11 15:27:59.837007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.329 [2024-07-11 15:27:59.837079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.329 [2024-07-11 15:27:59.837117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.329 [2024-07-11 15:27:59.837225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.329 [2024-07-11 15:27:59.837272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.329 
[2024-07-11 15:27:59.837310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.329 [2024-07-11 15:27:59.837345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.329 [2024-07-11 15:27:59.837437] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.329 [2024-07-11 15:27:59.837490] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.329 [2024-07-11 15:27:59.837596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:46.329 [2024-07-11 15:27:59.837626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:46.329 [2024-07-11 15:27:59.837637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:46.329 [2024-07-11 15:27:59.837647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:46.329 [2024-07-11 15:27:59.837658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:46.329 [2024-07-11 15:27:59.837668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:46.329 [2024-07-11 15:27:59.837679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:46.329 [2024-07-11 15:27:59.837690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:46.329 [2024-07-11 15:27:59.837701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:46.329 [2024-07-11 15:27:59.837711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:46.329 [2024-07-11 15:27:59.837765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.329 [2024-07-11 15:27:59.837777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.329 [2024-07-11 15:27:59.837800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.329 [2024-07-11 15:27:59.837825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.329 [2024-07-11 15:27:59.837836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.329 [2024-07-11 15:27:59.837848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.329 [2024-07-11 15:27:59.837875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.329 [2024-07-11 15:27:59.837888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.229 ms 00:19:46.329 [2024-07-11 15:27:59.837898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.329 [2024-07-11 15:27:59.872559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.329 [2024-07-11 15:27:59.872815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.329 [2024-07-11 15:27:59.872969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.561 ms 00:19:46.329 [2024-07-11 15:27:59.873031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.329 [2024-07-11 15:27:59.873263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.329 [2024-07-11 15:27:59.873329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:46.329 [2024-07-11 15:27:59.873443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:46.329 [2024-07-11 15:27:59.873611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.329 [2024-07-11 15:27:59.908706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.329 [2024-07-11 15:27:59.908986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.329 [2024-07-11 15:27:59.909136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.015 ms 00:19:46.330 [2024-07-11 15:27:59.909189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.330 [2024-07-11 15:27:59.909339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.330 [2024-07-11 15:27:59.909529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.330 [2024-07-11 15:27:59.909585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:46.330 [2024-07-11 15:27:59.909625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.330 [2024-07-11 15:27:59.910153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.330 [2024-07-11 15:27:59.910294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.330 [2024-07-11 15:27:59.910437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:19:46.330 [2024-07-11 15:27:59.910560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.330 [2024-07-11 15:27:59.910766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.330 [2024-07-11 15:27:59.910843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.330 [2024-07-11 15:27:59.910947] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:46.330 [2024-07-11 15:27:59.910998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.330 [2024-07-11 15:27:59.927177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.330 [2024-07-11 15:27:59.927375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.330 [2024-07-11 15:27:59.927519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.055 ms 00:19:46.330 [2024-07-11 15:27:59.927574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.588 [2024-07-11 15:27:59.944614] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:46.588 [2024-07-11 15:27:59.944808] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:46.588 [2024-07-11 15:27:59.945013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.588 [2024-07-11 15:27:59.945149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:46.588 [2024-07-11 15:27:59.945204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.239 ms 00:19:46.588 [2024-07-11 15:27:59.945303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.588 [2024-07-11 15:27:59.974864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.588 [2024-07-11 15:27:59.975072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:46.588 [2024-07-11 15:27:59.975227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.306 ms 00:19:46.589 [2024-07-11 15:27:59.975282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:27:59.990768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:27:59.990980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:46.589 [2024-07-11 15:27:59.991149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.240 ms 00:19:46.589 [2024-07-11 15:27:59.991205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.006895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.007092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:46.589 [2024-07-11 15:28:00.007121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.492 ms 00:19:46.589 [2024-07-11 15:28:00.007134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.008000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.008050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:46.589 [2024-07-11 15:28:00.008068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:19:46.589 [2024-07-11 15:28:00.008080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.079863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.079940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:46.589 [2024-07-11 15:28:00.079976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.745 ms 00:19:46.589 [2024-07-11 15:28:00.079988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.093344] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:46.589 [2024-07-11 15:28:00.106663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.106727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:46.589 [2024-07-11 15:28:00.106763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.483 ms 00:19:46.589 [2024-07-11 15:28:00.106774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.106907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.106930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:46.589 [2024-07-11 15:28:00.106942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:46.589 [2024-07-11 15:28:00.106953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.107017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.107056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:46.589 [2024-07-11 15:28:00.107088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:46.589 [2024-07-11 15:28:00.107099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.107133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.107147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:46.589 [2024-07-11 15:28:00.107165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:46.589 [2024-07-11 15:28:00.107176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.107212] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:46.589 [2024-07-11 15:28:00.107227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.107237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:46.589 [2024-07-11 15:28:00.107248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:46.589 [2024-07-11 15:28:00.107259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.136593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.136642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:46.589 [2024-07-11 15:28:00.136677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.305 ms 00:19:46.589 [2024-07-11 15:28:00.136689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.589 [2024-07-11 15:28:00.136809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.589 [2024-07-11 15:28:00.136843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:46.589 [2024-07-11 15:28:00.136856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:46.589 [2024-07-11 15:28:00.136866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:46.589 [2024-07-11 15:28:00.137873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.589 [2024-07-11 15:28:00.142266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.658 ms, result 0 00:19:46.589 [2024-07-11 15:28:00.143220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.589 [2024-07-11 15:28:00.159445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.848  Copying: 4096/4096 [kB] (average 23 MBps)[2024-07-11 15:28:00.332315] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.848 [2024-07-11 15:28:00.343539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.343580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:46.848 [2024-07-11 15:28:00.343614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:46.848 [2024-07-11 15:28:00.343626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.343662] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:46.848 [2024-07-11 15:28:00.346968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.346999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:46.848 [2024-07-11 15:28:00.347028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:19:46.848 [2024-07-11 15:28:00.347072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.348703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.348742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:46.848 [2024-07-11 15:28:00.348774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:19:46.848 [2024-07-11 15:28:00.348784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.352670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.352707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:46.848 [2024-07-11 15:28:00.352729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.863 ms 00:19:46.848 [2024-07-11 15:28:00.352740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.360047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.360241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:46.848 [2024-07-11 15:28:00.360402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.267 ms 00:19:46.848 [2024-07-11 15:28:00.360425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.390556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.390595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:46.848 [2024-07-11 15:28:00.390627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
30.049 ms 00:19:46.848 [2024-07-11 15:28:00.390638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.408377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.408417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:46.848 [2024-07-11 15:28:00.408450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.676 ms 00:19:46.848 [2024-07-11 15:28:00.408486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.408656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.408677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:46.848 [2024-07-11 15:28:00.408690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:46.848 [2024-07-11 15:28:00.408702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.848 [2024-07-11 15:28:00.441177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.848 [2024-07-11 15:28:00.441216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:46.848 [2024-07-11 15:28:00.441247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.452 ms 00:19:46.848 [2024-07-11 15:28:00.441258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.108 [2024-07-11 15:28:00.471557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.108 [2024-07-11 15:28:00.471595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:47.108 [2024-07-11 15:28:00.471627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.237 ms 00:19:47.108 [2024-07-11 15:28:00.471637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.108 [2024-07-11 15:28:00.500262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.108 [2024-07-11 15:28:00.500298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.109 [2024-07-11 15:28:00.500329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.564 ms 00:19:47.109 [2024-07-11 15:28:00.500340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.109 [2024-07-11 15:28:00.529610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.109 [2024-07-11 15:28:00.529648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.109 [2024-07-11 15:28:00.529680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.181 ms 00:19:47.109 [2024-07-11 15:28:00.529690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.109 [2024-07-11 15:28:00.529751] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.109 [2024-07-11 15:28:00.529781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 
15:28:00.529841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.529992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:47.109 [2024-07-11 15:28:00.530175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.109 [2024-07-11 15:28:00.530919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.530998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.531009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.531021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.531033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.110 [2024-07-11 15:28:00.531053] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.110 [2024-07-11 15:28:00.531064] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:19:47.110 [2024-07-11 15:28:00.531086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.110 [2024-07-11 15:28:00.531097] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.110 
[2024-07-11 15:28:00.531120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.110 [2024-07-11 15:28:00.531132] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.110 [2024-07-11 15:28:00.531143] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.110 [2024-07-11 15:28:00.531154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.110 [2024-07-11 15:28:00.531164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.110 [2024-07-11 15:28:00.531174] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.110 [2024-07-11 15:28:00.531184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.110 [2024-07-11 15:28:00.531195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.110 [2024-07-11 15:28:00.531206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.110 [2024-07-11 15:28:00.531223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:19:47.110 [2024-07-11 15:28:00.531235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.546662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.110 [2024-07-11 15:28:00.546698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.110 [2024-07-11 15:28:00.546731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.400 ms 00:19:47.110 [2024-07-11 15:28:00.546742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.547239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.110 [2024-07-11 15:28:00.547275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.110 [2024-07-11 15:28:00.547289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:19:47.110 [2024-07-11 15:28:00.547300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.588062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.110 [2024-07-11 15:28:00.588116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.110 [2024-07-11 15:28:00.588148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.110 [2024-07-11 15:28:00.588159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.588274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.110 [2024-07-11 15:28:00.588299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.110 [2024-07-11 15:28:00.588311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.110 [2024-07-11 15:28:00.588322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.588380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.110 [2024-07-11 15:28:00.588413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.110 [2024-07-11 15:28:00.588425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.110 [2024-07-11 15:28:00.588437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.588462] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:47.110 [2024-07-11 15:28:00.588476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.110 [2024-07-11 15:28:00.588493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.110 [2024-07-11 15:28:00.588505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.110 [2024-07-11 15:28:00.679000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.110 [2024-07-11 15:28:00.679086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.110 [2024-07-11 15:28:00.679122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.110 [2024-07-11 15:28:00.679133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.757319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.757384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.369 [2024-07-11 15:28:00.757418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.757429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.757522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.757539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.369 [2024-07-11 15:28:00.757550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.757561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.757595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.757608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.369 [2024-07-11 15:28:00.757619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.757635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.757749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.757768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.369 [2024-07-11 15:28:00.757780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.757791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.757852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.757883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.369 [2024-07-11 15:28:00.757894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.757905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.757953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.757967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.369 [2024-07-11 15:28:00.757977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.757988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:47.369 [2024-07-11 15:28:00.758121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.369 [2024-07-11 15:28:00.758142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.369 [2024-07-11 15:28:00.758168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.369 [2024-07-11 15:28:00.758186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.369 [2024-07-11 15:28:00.758363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 414.799 ms, result 0 00:19:48.305 00:19:48.305 00:19:48.305 15:28:01 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80576 00:19:48.305 15:28:01 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:48.305 15:28:01 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80576 00:19:48.305 15:28:01 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 80576 ']' 00:19:48.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.305 15:28:01 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.305 15:28:01 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.305 15:28:01 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.305 15:28:01 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.305 15:28:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:48.305 [2024-07-11 15:28:01.863954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:48.305 [2024-07-11 15:28:01.864419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80576 ] 00:19:48.565 [2024-07-11 15:28:02.025991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.822 [2024-07-11 15:28:02.206305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.389 15:28:02 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.390 15:28:02 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:19:49.390 15:28:02 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:49.647 [2024-07-11 15:28:03.134632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.647 [2024-07-11 15:28:03.134724] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.907 [2024-07-11 15:28:03.310590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.310644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.907 [2024-07-11 15:28:03.310680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.907 [2024-07-11 15:28:03.310693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.314006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.314077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.907 [2024-07-11 15:28:03.314095] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:19:49.907 [2024-07-11 15:28:03.314108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.314240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.907 [2024-07-11 15:28:03.315218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.907 [2024-07-11 15:28:03.315259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.315276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.907 [2024-07-11 15:28:03.315289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:19:49.907 [2024-07-11 15:28:03.315302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.316657] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.907 [2024-07-11 15:28:03.333027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.333097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.907 [2024-07-11 15:28:03.333121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.367 ms 00:19:49.907 [2024-07-11 15:28:03.333133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.333253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.333276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.907 [2024-07-11 15:28:03.333292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:49.907 [2024-07-11 15:28:03.333303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.337607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.337652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.907 [2024-07-11 15:28:03.337692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.237 ms 00:19:49.907 [2024-07-11 15:28:03.337704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.337855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.337875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.907 [2024-07-11 15:28:03.337889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:49.907 [2024-07-11 15:28:03.337901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.337946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.337961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.907 [2024-07-11 15:28:03.337974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:49.907 [2024-07-11 15:28:03.337985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.338068] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:49.907 [2024-07-11 15:28:03.342131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:49.907 [2024-07-11 15:28:03.342173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.907 [2024-07-11 15:28:03.342189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:19:49.907 [2024-07-11 15:28:03.342203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.342271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.342296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.907 [2024-07-11 15:28:03.342309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:49.907 [2024-07-11 15:28:03.342325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.342369] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:49.907 [2024-07-11 15:28:03.342413] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:49.907 [2024-07-11 15:28:03.342492] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.907 [2024-07-11 15:28:03.342517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:49.907 [2024-07-11 15:28:03.342617] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.907 [2024-07-11 15:28:03.342638] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.907 [2024-07-11 15:28:03.342654] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:49.907 [2024-07-11 15:28:03.342670] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.907 [2024-07-11 15:28:03.342683] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.907 [2024-07-11 15:28:03.342696] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:49.907 [2024-07-11 15:28:03.342707] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.907 [2024-07-11 15:28:03.342719] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.907 [2024-07-11 15:28:03.342730] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.907 [2024-07-11 15:28:03.342745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.342756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.907 [2024-07-11 15:28:03.342784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:19:49.907 [2024-07-11 15:28:03.342794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.342885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.907 [2024-07-11 15:28:03.342899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.907 [2024-07-11 15:28:03.342911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:49.907 [2024-07-11 15:28:03.342922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.907 [2024-07-11 15:28:03.343032] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.907 [2024-07-11 15:28:03.343051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.907 [2024-07-11 15:28:03.343064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.907 [2024-07-11 15:28:03.343136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.907 [2024-07-11 15:28:03.343181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.907 [2024-07-11 15:28:03.343203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.907 [2024-07-11 15:28:03.343213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:49.907 [2024-07-11 15:28:03.343241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.907 [2024-07-11 15:28:03.343252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.907 [2024-07-11 15:28:03.343263] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:49.907 [2024-07-11 15:28:03.343273] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.907 [2024-07-11 15:28:03.343296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343314] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.907 [2024-07-11 15:28:03.343340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.907 [2024-07-11 15:28:03.343371] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.907 [2024-07-11 15:28:03.343406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.907 [2024-07-11 15:28:03.343450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:49.907 [2024-07-11 15:28:03.343480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.907 [2024-07-11 15:28:03.343491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.907 [2024-07-11 
15:28:03.343503] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:49.908 [2024-07-11 15:28:03.343513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.908 [2024-07-11 15:28:03.343525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.908 [2024-07-11 15:28:03.343535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:49.908 [2024-07-11 15:28:03.343547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.908 [2024-07-11 15:28:03.343558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.908 [2024-07-11 15:28:03.343570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:49.908 [2024-07-11 15:28:03.343580] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.908 [2024-07-11 15:28:03.343594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.908 [2024-07-11 15:28:03.343605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:49.908 [2024-07-11 15:28:03.343617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.908 [2024-07-11 15:28:03.343627] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.908 [2024-07-11 15:28:03.343643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.908 [2024-07-11 15:28:03.343654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.908 [2024-07-11 15:28:03.343668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.908 [2024-07-11 15:28:03.343680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.908 [2024-07-11 15:28:03.343692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.908 [2024-07-11 15:28:03.343703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.908 [2024-07-11 15:28:03.343715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.908 [2024-07-11 15:28:03.343726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.908 [2024-07-11 15:28:03.343739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.908 [2024-07-11 15:28:03.343751] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.908 [2024-07-11 15:28:03.343776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.343789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:49.908 [2024-07-11 15:28:03.343805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:49.908 [2024-07-11 15:28:03.343817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:49.908 [2024-07-11 15:28:03.343830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:49.908 [2024-07-11 15:28:03.343842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:49.908 
[2024-07-11 15:28:03.343855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:49.908 [2024-07-11 15:28:03.343867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:49.908 [2024-07-11 15:28:03.343880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:49.908 [2024-07-11 15:28:03.343891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:49.908 [2024-07-11 15:28:03.343904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.343916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.343930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.343941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.343955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:49.908 [2024-07-11 15:28:03.343967] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.908 [2024-07-11 15:28:03.343981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.343993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.908 [2024-07-11 15:28:03.344009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.908 [2024-07-11 15:28:03.344021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.908 [2024-07-11 15:28:03.344046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.908 [2024-07-11 15:28:03.344062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.344076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.908 [2024-07-11 15:28:03.344088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:19:49.908 [2024-07-11 15:28:03.344101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.375943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.376001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.908 [2024-07-11 15:28:03.376052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.738 ms 00:19:49.908 [2024-07-11 15:28:03.376089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.376277] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.376302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.908 [2024-07-11 15:28:03.376316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:49.908 [2024-07-11 15:28:03.376328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.412971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.413073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.908 [2024-07-11 15:28:03.413111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.614 ms 00:19:49.908 [2024-07-11 15:28:03.413125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.413252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.413275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.908 [2024-07-11 15:28:03.413289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.908 [2024-07-11 15:28:03.413302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.413614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.413643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.908 [2024-07-11 15:28:03.413662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:19:49.908 [2024-07-11 15:28:03.413675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.413833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.413854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.908 [2024-07-11 15:28:03.413867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:19:49.908 [2024-07-11 15:28:03.413880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.430752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.430801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.908 [2024-07-11 15:28:03.430834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.844 ms 00:19:49.908 [2024-07-11 15:28:03.430848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.446900] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:49.908 [2024-07-11 15:28:03.446943] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.908 [2024-07-11 15:28:03.446978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.446991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.908 [2024-07-11 15:28:03.447004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.994 ms 00:19:49.908 [2024-07-11 15:28:03.447016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.476980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 
15:28:03.477057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:49.908 [2024-07-11 15:28:03.477077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.829 ms 00:19:49.908 [2024-07-11 15:28:03.477091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.493627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.493675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:49.908 [2024-07-11 15:28:03.493705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.422 ms 00:19:49.908 [2024-07-11 15:28:03.493723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.908 [2024-07-11 15:28:03.509234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.908 [2024-07-11 15:28:03.509276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:49.908 [2024-07-11 15:28:03.509309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.390 ms 00:19:49.908 [2024-07-11 15:28:03.509321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.909 [2024-07-11 15:28:03.510276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.909 [2024-07-11 15:28:03.510316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.909 [2024-07-11 15:28:03.510332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:19:49.909 [2024-07-11 15:28:03.510346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.589759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.589867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.167 [2024-07-11 15:28:03.589888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.381 ms 00:19:50.167 [2024-07-11 15:28:03.589902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.601935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.167 [2024-07-11 15:28:03.615819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.615884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.167 [2024-07-11 15:28:03.615956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.714 ms 00:19:50.167 [2024-07-11 15:28:03.615971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.616159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.616182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.167 [2024-07-11 15:28:03.616199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:50.167 [2024-07-11 15:28:03.616210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.616283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.616300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.167 [2024-07-11 15:28:03.616314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:50.167 
[2024-07-11 15:28:03.616325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.616363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.616378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.167 [2024-07-11 15:28:03.616395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.167 [2024-07-11 15:28:03.616407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.616450] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.167 [2024-07-11 15:28:03.616466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.616481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.167 [2024-07-11 15:28:03.616495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:50.167 [2024-07-11 15:28:03.616507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.649279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.649333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.167 [2024-07-11 15:28:03.649352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.741 ms 00:19:50.167 [2024-07-11 15:28:03.649365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.649536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.167 [2024-07-11 15:28:03.649562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.167 [2024-07-11 15:28:03.649576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:50.167 [2024-07-11 15:28:03.649588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.167 [2024-07-11 15:28:03.650598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.167 [2024-07-11 15:28:03.654749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.678 ms, result 0 00:19:50.167 [2024-07-11 15:28:03.655945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.167 Some configs were skipped because the RPC state that can call them passed over. 
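The unmap calls that follow are the last steps of the test body. For orientation, the xtrace markers above and below (ftl/trim.sh@92 through @102, plus the waitforlisten/killprocess helpers from autotest_common.sh) let the flow be reconstructed. The sketch below is that reconstruction, not the verbatim script: the binary path, flags, LBAs, and pid are copied from the log, while the "$!" capture, $rootdir, and the config source for load_config are assumptions.

    # minimal sketch of the trim.sh@92-102 flow seen in this log (assumptions marked)
    "$rootdir"/build/bin/spdk_tgt -L ftl_init &                                       # trim.sh@92
    svcpid=$!                                                                         # trim.sh@93; expands to 80576 in this run, capture mechanism assumed
    waitforlisten "$svcpid"                                                           # trim.sh@94; helper from autotest_common.sh, waits on /var/tmp/spdk.sock
    "$rootdir"/scripts/rpc.py load_config                                             # trim.sh@96; re-creates ftl0, config source not visible in the log
    "$rootdir"/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024        # trim.sh@99; start of the LBA space
    "$rootdir"/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 # trim.sh@100; last 1024 LBAs of the 23592960-entry L2P
    killprocess "$svcpid"                                                             # trim.sh@102; triggers the 'FTL shutdown' sequence below

Each bdev_ftl_unmap call prints "true" on success, matching the two "FTL trim ... result 0" management-process lines that follow.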
00:19:50.167 15:28:03 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:50.424 [2024-07-11 15:28:03.945831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.424 [2024-07-11 15:28:03.946100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:50.424 [2024-07-11 15:28:03.946277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.610 ms 00:19:50.424 [2024-07-11 15:28:03.946335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.424 [2024-07-11 15:28:03.946534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.315 ms, result 0 00:19:50.424 true 00:19:50.424 15:28:03 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:50.682 [2024-07-11 15:28:04.211539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.682 [2024-07-11 15:28:04.211607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:50.682 [2024-07-11 15:28:04.211629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:19:50.682 [2024-07-11 15:28:04.211643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.682 [2024-07-11 15:28:04.211695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.482 ms, result 0 00:19:50.682 true 00:19:50.682 15:28:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80576 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80576 ']' 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80576 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80576 00:19:50.682 killing process with pid 80576 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80576' 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 80576 00:19:50.682 15:28:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 80576 00:19:51.618 [2024-07-11 15:28:05.150544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.618 [2024-07-11 15:28:05.150613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.618 [2024-07-11 15:28:05.150652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.618 [2024-07-11 15:28:05.150663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.618 [2024-07-11 15:28:05.150696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.618 [2024-07-11 15:28:05.153914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.618 [2024-07-11 15:28:05.153951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.618 [2024-07-11 15:28:05.153981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.196 ms 00:19:51.618 [2024-07-11 15:28:05.154002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.618 [2024-07-11 15:28:05.154333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.618 [2024-07-11 15:28:05.154357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.618 [2024-07-11 15:28:05.154370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:51.618 [2024-07-11 15:28:05.154397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.618 [2024-07-11 15:28:05.158349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.618 [2024-07-11 15:28:05.158398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.618 [2024-07-11 15:28:05.158418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.932 ms 00:19:51.618 [2024-07-11 15:28:05.158432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.618 [2024-07-11 15:28:05.165546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.165582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.619 [2024-07-11 15:28:05.165613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.068 ms 00:19:51.619 [2024-07-11 15:28:05.165628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.619 [2024-07-11 15:28:05.177710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.177767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.619 [2024-07-11 15:28:05.177783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.028 ms 00:19:51.619 [2024-07-11 15:28:05.177797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.619 [2024-07-11 15:28:05.186278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.186342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.619 [2024-07-11 15:28:05.186361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.440 ms 00:19:51.619 [2024-07-11 15:28:05.186373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.619 [2024-07-11 15:28:05.186539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.186561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.619 [2024-07-11 15:28:05.186574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:51.619 [2024-07-11 15:28:05.186600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.619 [2024-07-11 15:28:05.199127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.199198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:51.619 [2024-07-11 15:28:05.199215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.504 ms 00:19:51.619 [2024-07-11 15:28:05.199227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.619 [2024-07-11 15:28:05.211172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.211212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:51.619 [2024-07-11 
15:28:05.211243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.904 ms 00:19:51.619 [2024-07-11 15:28:05.211260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.619 [2024-07-11 15:28:05.222948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.619 [2024-07-11 15:28:05.222987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.619 [2024-07-11 15:28:05.223017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.648 ms 00:19:51.619 [2024-07-11 15:28:05.223029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.879 [2024-07-11 15:28:05.235895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.879 [2024-07-11 15:28:05.235967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.879 [2024-07-11 15:28:05.235983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:19:51.879 [2024-07-11 15:28:05.235995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.879 [2024-07-11 15:28:05.236046] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.879 [2024-07-11 15:28:05.236074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.879 [2024-07-11 15:28:05.236306] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:19:51.879-00:19:51.880 [2024-07-11 15:28:05.236317-15:28:05.238964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18 through Band 100 (83 identical entries): 0 / 261120 wr_cnt: 0 state: free
00:19:51.880 [2024-07-11 15:28:05.238986] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:51.880 [2024-07-11 15:28:05.238998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58
00:19:51.880 [2024-07-11 15:28:05.239017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:51.880 [2024-07-11 15:28:05.239027] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:51.880 [2024-07-11 15:28:05.239054] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:51.880 [2024-07-11 15:28:05.239068] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:51.880 [2024-07-11 15:28:05.239080] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:51.880 [2024-07-11 15:28:05.239091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:51.880 [2024-07-11 15:28:05.239119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:51.880 [2024-07-11 15:28:05.239129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:51.880 [2024-07-11 15:28:05.239152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:51.880 [2024-07-11 15:28:05.239164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.880 [2024-07-11 15:28:05.239177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:51.880 [2024-07-11 15:28:05.239189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms
00:19:51.880 [2024-07-11 15:28:05.239201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.880 [2024-07-11 15:28:05.255126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.880 [2024-07-11 15:28:05.255316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:51.880 [2024-07-11 15:28:05.255485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.862 ms
00:19:51.880 [2024-07-11 15:28:05.255550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.880 [2024-07-11 15:28:05.256164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:19:51.880 [2024-07-11 15:28:05.256333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.880 [2024-07-11 15:28:05.256471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:19:51.880 [2024-07-11 15:28:05.256541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.307480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.307749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.880 [2024-07-11 15:28:05.307880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.307934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.308096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.308256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.880 [2024-07-11 15:28:05.308312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.308360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.308475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.308551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.880 [2024-07-11 15:28:05.308599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.308644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.308771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.308845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.880 [2024-07-11 15:28:05.308886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.308986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.400200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.400539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.880 [2024-07-11 15:28:05.400657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.400712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.478286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.478648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.880 [2024-07-11 15:28:05.478777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.478848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.479066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.479197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.880 [2024-07-11 15:28:05.479310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.479371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
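
The band dump and statistics block above follow a fixed, grep-able shape: "<file>: <line>:<function>: *NOTICE*: [FTL][ftl0] <payload>". The "WAF: inf" entry is consistent with the two counters just above it: write amplification here reads as total writes / user writes, and 960 device writes (presumably metadata traffic) against 0 user writes gives an infinite ratio. A minimal parsing sketch in Python, assuming this console output has been saved to a file (the name ftl.log is hypothetical):

    import re
    from collections import Counter

    # Matches e.g. "167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free"
    band_re = re.compile(r"ftl_dev_dump_bands: \*NOTICE\*: \[FTL\]\[ftl0\] Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")
    # Matches the "total writes" / "user writes" lines of ftl_dev_dump_stats
    stat_re = re.compile(r"ftl_dev_dump_stats: \*NOTICE\*: \[FTL\]\[ftl0\] (total writes|user writes): (\d+)")

    states, stats = Counter(), {}
    with open("ftl.log") as f:  # hypothetical capture of this console log
        for line in f:
            if m := band_re.search(line):
                states[m.group(5)] += 1  # tally band states across every dump in the capture
            elif m := stat_re.search(line):
                stats[m.group(1)] = int(m.group(2))

    print(states)  # a single dump from this run would yield Counter({'free': 100})
    total, user = stats.get("total writes", 0), stats.get("user writes", 0)
    print("WAF:", total / user if user else "inf")  # 960 / 0 -> "inf", as logged
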
00:19:51.880 [2024-07-11 15:28:05.479525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.479588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.880 [2024-07-11 15:28:05.479634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.479676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.479852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.479918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.880 [2024-07-11 15:28:05.479964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.480004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.480157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.480222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.880 [2024-07-11 15:28:05.480270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.480385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.480508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.480643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.880 [2024-07-11 15:28:05.480754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.480783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.480843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.880 [2024-07-11 15:28:05.480880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.880 [2024-07-11 15:28:05.480892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.880 [2024-07-11 15:28:05.480905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.880 [2024-07-11 15:28:05.481107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.500 ms, result 0 00:19:52.816 15:28:06 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.076 [2024-07-11 15:28:06.450235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
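
A quick sanity check on the spdk_dd invocation above: if --count=65536 is taken in the FTL bdev's native block units, and the device uses the 4 KiB block size that the layout dump below implies (an assumption), the transfer is 65536 x 4 KiB = 256 MiB, which matches the "Copying: 256/256 [MB]" progress summary reported once the copy finishes.
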
00:19:53.076 [2024-07-11 15:28:06.450421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80634 ] 00:19:53.076 [2024-07-11 15:28:06.609812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.336 [2024-07-11 15:28:06.783768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.595 [2024-07-11 15:28:07.075982] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:53.596 [2024-07-11 15:28:07.076099] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:53.856 [2024-07-11 15:28:07.236315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.236372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:53.856 [2024-07-11 15:28:07.236407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:53.856 [2024-07-11 15:28:07.236420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.239727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.239771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.856 [2024-07-11 15:28:07.239805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:19:53.856 [2024-07-11 15:28:07.239816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.239977] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:53.856 [2024-07-11 15:28:07.240975] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:53.856 [2024-07-11 15:28:07.241058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.241075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.856 [2024-07-11 15:28:07.241087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:19:53.856 [2024-07-11 15:28:07.241098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.242239] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:53.856 [2024-07-11 15:28:07.259631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.259678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:53.856 [2024-07-11 15:28:07.259703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.394 ms 00:19:53.856 [2024-07-11 15:28:07.259715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.259833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.259856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:53.856 [2024-07-11 15:28:07.259870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:53.856 [2024-07-11 15:28:07.259882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.264581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:53.856 [2024-07-11 15:28:07.264629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.856 [2024-07-11 15:28:07.264662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.625 ms 00:19:53.856 [2024-07-11 15:28:07.264679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.264803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.264839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.856 [2024-07-11 15:28:07.264882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:53.856 [2024-07-11 15:28:07.264893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.264932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.264948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:53.856 [2024-07-11 15:28:07.264959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:53.856 [2024-07-11 15:28:07.264973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.265003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:53.856 [2024-07-11 15:28:07.269220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.269254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.856 [2024-07-11 15:28:07.269284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:19:53.856 [2024-07-11 15:28:07.269295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.269357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.856 [2024-07-11 15:28:07.269374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:53.856 [2024-07-11 15:28:07.269387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:53.856 [2024-07-11 15:28:07.269397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.856 [2024-07-11 15:28:07.269420] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:53.856 [2024-07-11 15:28:07.269445] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:53.856 [2024-07-11 15:28:07.269521] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:53.856 [2024-07-11 15:28:07.269542] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:53.856 [2024-07-11 15:28:07.269644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:53.856 [2024-07-11 15:28:07.269659] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:53.857 [2024-07-11 15:28:07.269674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:53.857 [2024-07-11 15:28:07.269688] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:53.857 [2024-07-11 15:28:07.269701] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:53.857 [2024-07-11 15:28:07.269714] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:53.857 [2024-07-11 15:28:07.269729] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:53.857 [2024-07-11 15:28:07.269740] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:53.857 [2024-07-11 15:28:07.269751] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:53.857 [2024-07-11 15:28:07.269762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-07-11 15:28:07.269773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:53.857 [2024-07-11 15:28:07.269784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:19:53.857 [2024-07-11 15:28:07.269795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-07-11 15:28:07.269925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.857 [2024-07-11 15:28:07.269940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:53.857 [2024-07-11 15:28:07.269952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:53.857 [2024-07-11 15:28:07.269967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.857 [2024-07-11 15:28:07.270120] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:53.857 [2024-07-11 15:28:07.270140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:53.857 [2024-07-11 15:28:07.270153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:53.857 [2024-07-11 15:28:07.270187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:53.857 [2024-07-11 15:28:07.270220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.857 [2024-07-11 15:28:07.270241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:53.857 [2024-07-11 15:28:07.270252] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:53.857 [2024-07-11 15:28:07.270262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:53.857 [2024-07-11 15:28:07.270272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:53.857 [2024-07-11 15:28:07.270283] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:53.857 [2024-07-11 15:28:07.270293] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:53.857 [2024-07-11 15:28:07.270314] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270339] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:53.857 [2024-07-11 15:28:07.270363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:53.857 [2024-07-11 15:28:07.270408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:53.857 [2024-07-11 15:28:07.270455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:53.857 [2024-07-11 15:28:07.270486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:53.857 [2024-07-11 15:28:07.270517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.857 [2024-07-11 15:28:07.270538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:53.857 [2024-07-11 15:28:07.270549] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:53.857 [2024-07-11 15:28:07.270559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:53.857 [2024-07-11 15:28:07.270570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:53.857 [2024-07-11 15:28:07.270580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:53.857 [2024-07-11 15:28:07.270590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:53.857 [2024-07-11 15:28:07.270611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:53.857 [2024-07-11 15:28:07.270622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270632] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:53.857 [2024-07-11 15:28:07.270644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:53.857 [2024-07-11 15:28:07.270654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:53.857 [2024-07-11 15:28:07.270677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:53.857 [2024-07-11 15:28:07.270688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:53.857 [2024-07-11 15:28:07.270698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:53.857 
[2024-07-11 15:28:07.270709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:53.857 [2024-07-11 15:28:07.270721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:53.857 [2024-07-11 15:28:07.270731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:53.857 [2024-07-11 15:28:07.270744] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:53.857 [2024-07-11 15:28:07.270763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.857 [2024-07-11 15:28:07.270776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:53.857 [2024-07-11 15:28:07.270788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:53.857 [2024-07-11 15:28:07.270799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:53.857 [2024-07-11 15:28:07.270811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:53.857 [2024-07-11 15:28:07.270822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:53.857 [2024-07-11 15:28:07.270833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:53.857 [2024-07-11 15:28:07.270844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:53.857 [2024-07-11 15:28:07.270856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:53.857 [2024-07-11 15:28:07.270867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:53.857 [2024-07-11 15:28:07.270878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:53.857 [2024-07-11 15:28:07.270890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:53.857 [2024-07-11 15:28:07.270901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:53.857 [2024-07-11 15:28:07.270912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:53.857 [2024-07-11 15:28:07.270924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:53.857 [2024-07-11 15:28:07.270936] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:53.857 [2024-07-11 15:28:07.270948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:53.858 [2024-07-11 15:28:07.270961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:53.858 [2024-07-11 15:28:07.270973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:53.858 [2024-07-11 15:28:07.270984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:53.858 [2024-07-11 15:28:07.270996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:53.858 [2024-07-11 15:28:07.271009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.271020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:53.858 [2024-07-11 15:28:07.271032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:19:53.858 [2024-07-11 15:28:07.271043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.308830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.309287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.858 [2024-07-11 15:28:07.309329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.685 ms 00:19:53.858 [2024-07-11 15:28:07.309346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.309545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.309566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:53.858 [2024-07-11 15:28:07.309581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:53.858 [2024-07-11 15:28:07.309600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.346270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.346327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.858 [2024-07-11 15:28:07.346359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.636 ms 00:19:53.858 [2024-07-11 15:28:07.346370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.346518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.346537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.858 [2024-07-11 15:28:07.346550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:53.858 [2024-07-11 15:28:07.346562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.346901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.346918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.858 [2024-07-11 15:28:07.346930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:53.858 [2024-07-11 15:28:07.346941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.347104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.347145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.858 [2024-07-11 15:28:07.347157] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:19:53.858 [2024-07-11 15:28:07.347168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.363085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.363140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.858 [2024-07-11 15:28:07.363172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.889 ms 00:19:53.858 [2024-07-11 15:28:07.363183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.379153] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:53.858 [2024-07-11 15:28:07.379194] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:53.858 [2024-07-11 15:28:07.379229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.379240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:53.858 [2024-07-11 15:28:07.379253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.883 ms 00:19:53.858 [2024-07-11 15:28:07.379263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.407987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.408055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:53.858 [2024-07-11 15:28:07.408090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.636 ms 00:19:53.858 [2024-07-11 15:28:07.408102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.423560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.423601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:53.858 [2024-07-11 15:28:07.423634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.350 ms 00:19:53.858 [2024-07-11 15:28:07.423645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.438863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.438901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:53.858 [2024-07-11 15:28:07.438933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.131 ms 00:19:53.858 [2024-07-11 15:28:07.438943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.858 [2024-07-11 15:28:07.439790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.858 [2024-07-11 15:28:07.439860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:53.858 [2024-07-11 15:28:07.439892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:19:53.858 [2024-07-11 15:28:07.439904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.509334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.509407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.117 [2024-07-11 15:28:07.509442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.397 ms 00:19:54.117 [2024-07-11 15:28:07.509470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.521282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:54.117 [2024-07-11 15:28:07.534186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.534249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.117 [2024-07-11 15:28:07.534286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.578 ms 00:19:54.117 [2024-07-11 15:28:07.534298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.534478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.534498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:54.117 [2024-07-11 15:28:07.534516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:54.117 [2024-07-11 15:28:07.534527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.534588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.534603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.117 [2024-07-11 15:28:07.534615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:54.117 [2024-07-11 15:28:07.534626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.534653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.534665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:54.117 [2024-07-11 15:28:07.534676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:54.117 [2024-07-11 15:28:07.534691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.534729] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:54.117 [2024-07-11 15:28:07.534747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.534759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:54.117 [2024-07-11 15:28:07.534771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:54.117 [2024-07-11 15:28:07.534783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.566161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.566337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.117 [2024-07-11 15:28:07.566463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.335 ms 00:19:54.117 [2024-07-11 15:28:07.566514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.117 [2024-07-11 15:28:07.566672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.117 [2024-07-11 15:28:07.566746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.117 [2024-07-11 15:28:07.566794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:54.117 [2024-07-11 15:28:07.566834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
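
The layout dump above reports each region twice, in effect: dump_region prints offsets and sizes in MiB, while the superblock dump prints the same regions as raw hex block offsets and counts (blk_offs/blk_sz). The two agree if one FTL block is 4 KiB: region type 0x2 with blk_offs:0x20 blk_sz:0x5a00 works out to 0.12 MiB / 90.00 MiB, exactly the l2p region, and 23592960 L2P entries at an address size of 4 bytes is likewise 90 MiB. A short sketch of that cross-check, with the 4 KiB block size as the stated assumption:

    BLOCK_SIZE = 4096  # assumption: FTL block size implied by the dumps above

    def region_mib(blk_offs: int, blk_sz: int) -> tuple:
        """Convert a blk_offs/blk_sz pair from the superblock layout dump to (offset, size) in MiB."""
        return (blk_offs * BLOCK_SIZE / 2**20, blk_sz * BLOCK_SIZE / 2**20)

    # Region type:0x2 (the l2p region): blk_offs:0x20 blk_sz:0x5a00
    print(region_mib(0x20, 0x5a00))  # -> (0.125, 90.0), matching "offset: 0.12 MiB / blocks: 90.00 MiB"

    # Independent check: L2P table size = entries * address size
    print(23592960 * 4 / 2**20)      # -> 90.0 MiB, the same l2p region size
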
00:19:54.117 [2024-07-11 15:28:07.567938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:54.117 [2024-07-11 15:28:07.572322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.220 ms, result 0
00:19:54.117 [2024-07-11 15:28:07.573226] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:54.117 [2024-07-11 15:28:07.589865] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:04.712  Copying: 256/256 [MB] (average 24 MBps) (ten intermediate progress updates, 26-242 MB at 23-26 MBps, condensed)
[2024-07-11 15:28:18.229126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:04.712 [2024-07-11 15:28:18.240988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.241061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:20:04.712 [2024-07-11 15:28:18.241098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:20:04.712 [2024-07-11 15:28:18.241110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.712 [2024-07-11 15:28:18.241140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:20:04.712 [2024-07-11 15:28:18.244286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.244323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:20:04.712 [2024-07-11 15:28:18.244354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms
00:20:04.712 [2024-07-11 15:28:18.244364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.712 [2024-07-11 15:28:18.244662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.244680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:20:04.712 [2024-07-11 15:28:18.244692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms
00:20:04.712 [2024-07-11 15:28:18.244703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.712 [2024-07-11 15:28:18.248355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.248381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:20:04.712 [2024-07-11 15:28:18.248411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms
00:20:04.712 [2024-07-11 15:28:18.248427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.712 [2024-07-11 15:28:18.255586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.255615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:20:04.712 [2024-07-11 15:28:18.255645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.136 ms
00:20:04.712 [2024-07-11 15:28:18.255657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
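
The copy summary above is internally consistent: 256 MB at an average of 24 MBps is roughly 10.7 s, matching the wall clock between the channel create before the copy (15:28:07.59) and the teardown after it (15:28:18.23), about 10.6 s. For spotting regressions in runs like this, the trace_step name/duration pairs are the natural thing to mine; a sketch (same hypothetical ftl.log capture as above) that pairs each "name:" with the "duration:" that follows it and prints the slowest steps:

    import re

    # Records may be wrapped mid-entry in the captured console text, so join it first.
    text = open("ftl.log").read().replace("\n", " ")
    pair_re = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d+ .*?duration: ([\d.]+) ms")

    steps = [(float(ms), name) for name, ms in pair_re.findall(text)]
    for ms, name in sorted(steps, reverse=True)[:5]:
        print(f"{ms:8.3f} ms  {name}")
    # On this run the top of the list would include "Restore P2L checkpoints"
    # (69.397 ms) and "Initialize metadata" (37.685 ms).
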
00:20:04.712 [2024-07-11 15:28:18.285110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.285153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:20:04.712 [2024-07-11 15:28:18.285187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.383 ms
00:20:04.712 [2024-07-11 15:28:18.285199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.712 [2024-07-11 15:28:18.303495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.303566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:04.712 [2024-07-11 15:28:18.303588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.135 ms
00:20:04.712 [2024-07-11 15:28:18.303601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.712 [2024-07-11 15:28:18.303834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.712 [2024-07-11 15:28:18.303856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:04.712 [2024-07-11 15:28:18.303869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms
00:20:04.712 [2024-07-11 15:28:18.303880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.972 [2024-07-11 15:28:18.334968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.972 [2024-07-11 15:28:18.335009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:20:04.972 [2024-07-11 15:28:18.335055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.065 ms
00:20:04.972 [2024-07-11 15:28:18.335068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.972 [2024-07-11 15:28:18.363978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.972 [2024-07-11 15:28:18.364016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:20:04.972 [2024-07-11 15:28:18.364062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.830 ms
00:20:04.972 [2024-07-11 15:28:18.364073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.972 [2024-07-11 15:28:18.393226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.972 [2024-07-11 15:28:18.393265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:04.972 [2024-07-11 15:28:18.393297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.074 ms
00:20:04.972 [2024-07-11 15:28:18.393307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.972 [2024-07-11 15:28:18.422327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:04.972 [2024-07-11 15:28:18.422369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:04.972 [2024-07-11 15:28:18.422386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.927 ms
00:20:04.972 [2024-07-11 15:28:18.422397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:04.972 [2024-07-11 15:28:18.422480] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:04.972-00:20:04.973 [2024-07-11 15:28:18.422520-15:28:18.423729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100 (100 identical entries): 0 / 261120 wr_cnt: 0 state: free
00:20:04.973 [2024-07-11 15:28:18.423750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:04.973 [2024-07-11 15:28:18.423762] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*:
[FTL][ftl0] device UUID: f0bc7114-fa44-4b7b-b47f-5e2eeb48ec58 00:20:04.973 [2024-07-11 15:28:18.423774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:04.973 [2024-07-11 15:28:18.423785] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:04.973 [2024-07-11 15:28:18.423809] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:04.973 [2024-07-11 15:28:18.423820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:04.973 [2024-07-11 15:28:18.423831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:04.973 [2024-07-11 15:28:18.423843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:04.973 [2024-07-11 15:28:18.423854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:04.973 [2024-07-11 15:28:18.423864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:04.973 [2024-07-11 15:28:18.423874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:04.973 [2024-07-11 15:28:18.423886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.973 [2024-07-11 15:28:18.423898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:04.973 [2024-07-11 15:28:18.423920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:20:04.973 [2024-07-11 15:28:18.423935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.973 [2024-07-11 15:28:18.440100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.973 [2024-07-11 15:28:18.440138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:04.973 [2024-07-11 15:28:18.440171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.139 ms 00:20:04.973 [2024-07-11 15:28:18.440183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.973 [2024-07-11 15:28:18.440602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.973 [2024-07-11 15:28:18.440624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:04.973 [2024-07-11 15:28:18.440644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:04.973 [2024-07-11 15:28:18.440655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.973 [2024-07-11 15:28:18.478289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.973 [2024-07-11 15:28:18.478374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.973 [2024-07-11 15:28:18.478408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.973 [2024-07-11 15:28:18.478434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.973 [2024-07-11 15:28:18.478539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.973 [2024-07-11 15:28:18.478556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.973 [2024-07-11 15:28:18.478574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.973 [2024-07-11 15:28:18.478585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.973 [2024-07-11 15:28:18.478643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.973 [2024-07-11 15:28:18.478661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
trim map 00:20:04.973 [2024-07-11 15:28:18.478673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.973 [2024-07-11 15:28:18.478684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.973 [2024-07-11 15:28:18.478707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.974 [2024-07-11 15:28:18.478720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.974 [2024-07-11 15:28:18.478731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.974 [2024-07-11 15:28:18.478748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.974 [2024-07-11 15:28:18.571444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.974 [2024-07-11 15:28:18.571509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.974 [2024-07-11 15:28:18.571543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.974 [2024-07-11 15:28:18.571554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.649640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.649706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.233 [2024-07-11 15:28:18.649742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.649760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.649852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.649882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.233 [2024-07-11 15:28:18.649893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.649904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.649934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.649947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.233 [2024-07-11 15:28:18.649957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.649967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.650132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.650154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.233 [2024-07-11 15:28:18.650167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.650179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.650230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.650248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.233 [2024-07-11 15:28:18.650260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.650272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.650324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.650355] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.233 [2024-07-11 15:28:18.650366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.650392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.650467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.233 [2024-07-11 15:28:18.650482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.233 [2024-07-11 15:28:18.650493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.233 [2024-07-11 15:28:18.650503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.233 [2024-07-11 15:28:18.650653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 409.664 ms, result 0 00:20:06.167 00:20:06.167 00:20:06.167 15:28:19 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:06.735 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:06.735 Process with pid 80576 is not found 00:20:06.735 15:28:20 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80576 00:20:06.735 15:28:20 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 80576 ']' 00:20:06.735 15:28:20 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 80576 00:20:06.735 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80576) - No such process 00:20:06.735 15:28:20 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 80576 is not found' 00:20:06.735 ************************************ 00:20:06.735 END TEST ftl_trim 00:20:06.735 ************************************ 00:20:06.735 00:20:06.735 real 1m9.047s 00:20:06.735 user 1m34.166s 00:20:06.735 sys 0m6.321s 00:20:06.735 15:28:20 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:06.735 15:28:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:06.735 15:28:20 ftl -- common/autotest_common.sh@1142 -- # return 0 00:20:06.993 15:28:20 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:06.993 15:28:20 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:06.993 15:28:20 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.993 15:28:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:06.993 ************************************ 00:20:06.993 START TEST ftl_restore 00:20:06.993 ************************************ 00:20:06.993 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:06.993 * Looking for test storage... 
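The "Process with pid 80576 is not found" lines above come from the harness probing a PID that has already exited: `kill -0` delivers no signal and only reports whether the PID is still signalable, so it serves as a cheap liveness check before the real kill. A minimal sketch of that guard, assuming a simplified shape for the killprocess helper in autotest_common.sh (the actual helper body is not shown in this log):

#!/usr/bin/env bash
# Sketch: liveness-checked process kill, modeled on the trace above.
killprocess() {
    local pid=$1
    # kill -0 sends no signal; it only tests whether $pid exists
    # and can be signaled by the current user.
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        wait "$pid" 2>/dev/null
    else
        echo "Process with pid $pid is not found"
    fi
}

killprocess 80576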
00:20:06.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.993 15:28:20 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:06.993 15:28:20 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.8X6yce434g 00:20:06.994 15:28:20 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80835 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80835 00:20:06.994 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 80835 ']' 00:20:06.994 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.994 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:06.994 15:28:20 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.994 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.994 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:06.994 15:28:20 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:06.994 [2024-07-11 15:28:20.589153] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:06.994 [2024-07-11 15:28:20.589759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80835 ] 00:20:07.252 [2024-07-11 15:28:20.766072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.510 [2024-07-11 15:28:20.987421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.075 15:28:21 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.075 15:28:21 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:20:08.075 15:28:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:08.075 15:28:21 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:08.075 15:28:21 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:08.075 15:28:21 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:08.075 15:28:21 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:08.075 15:28:21 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:08.640 15:28:21 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:08.640 15:28:21 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:08.640 15:28:21 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:08.640 15:28:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:08.640 15:28:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:08.640 15:28:21 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:20:08.640 15:28:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:08.640 15:28:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:08.898 { 00:20:08.898 "name": "nvme0n1", 00:20:08.898 "aliases": [ 00:20:08.898 "0b2065cf-e2b6-4404-b412-d5d04b3d6b52" 00:20:08.898 ], 00:20:08.898 "product_name": "NVMe disk", 00:20:08.898 "block_size": 4096, 00:20:08.898 "num_blocks": 1310720, 00:20:08.898 "uuid": "0b2065cf-e2b6-4404-b412-d5d04b3d6b52", 00:20:08.898 "assigned_rate_limits": { 00:20:08.898 "rw_ios_per_sec": 0, 00:20:08.898 "rw_mbytes_per_sec": 0, 00:20:08.898 "r_mbytes_per_sec": 0, 00:20:08.898 "w_mbytes_per_sec": 0 00:20:08.898 }, 00:20:08.898 "claimed": true, 00:20:08.898 "claim_type": "read_many_write_one", 00:20:08.898 "zoned": false, 00:20:08.898 "supported_io_types": { 00:20:08.898 "read": true, 00:20:08.898 "write": true, 00:20:08.898 "unmap": true, 00:20:08.898 "flush": true, 00:20:08.898 "reset": true, 00:20:08.898 "nvme_admin": true, 00:20:08.898 "nvme_io": true, 00:20:08.898 "nvme_io_md": false, 00:20:08.898 "write_zeroes": true, 00:20:08.898 "zcopy": false, 00:20:08.898 "get_zone_info": false, 00:20:08.898 "zone_management": false, 00:20:08.898 "zone_append": false, 00:20:08.898 "compare": true, 00:20:08.898 "compare_and_write": false, 00:20:08.898 "abort": true, 00:20:08.898 "seek_hole": false, 00:20:08.898 "seek_data": false, 00:20:08.898 "copy": true, 00:20:08.898 "nvme_iov_md": false 00:20:08.898 }, 00:20:08.898 "driver_specific": { 00:20:08.898 "nvme": [ 00:20:08.898 { 00:20:08.898 "pci_address": "0000:00:11.0", 00:20:08.898 "trid": { 00:20:08.898 "trtype": "PCIe", 00:20:08.898 "traddr": "0000:00:11.0" 00:20:08.898 }, 00:20:08.898 "ctrlr_data": { 00:20:08.898 "cntlid": 0, 00:20:08.898 "vendor_id": "0x1b36", 00:20:08.898 "model_number": "QEMU NVMe Ctrl", 00:20:08.898 "serial_number": "12341", 00:20:08.898 "firmware_revision": "8.0.0", 00:20:08.898 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:08.898 "oacs": { 00:20:08.898 "security": 0, 00:20:08.898 "format": 1, 00:20:08.898 "firmware": 0, 00:20:08.898 "ns_manage": 1 00:20:08.898 }, 00:20:08.898 "multi_ctrlr": false, 00:20:08.898 "ana_reporting": false 00:20:08.898 }, 00:20:08.898 "vs": { 00:20:08.898 "nvme_version": "1.4" 00:20:08.898 }, 00:20:08.898 "ns_data": { 00:20:08.898 "id": 1, 00:20:08.898 "can_share": false 00:20:08.898 } 00:20:08.898 } 00:20:08.898 ], 00:20:08.898 "mp_policy": "active_passive" 00:20:08.898 } 00:20:08.898 } 00:20:08.898 ]' 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:08.898 15:28:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:20:08.898 15:28:22 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:08.898 15:28:22 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:08.898 15:28:22 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:08.898 15:28:22 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:20:08.898 15:28:22 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:09.156 15:28:22 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b16aea30-0e83-45be-a8bc-8cc9cf3452e3 00:20:09.156 15:28:22 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:09.156 15:28:22 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b16aea30-0e83-45be-a8bc-8cc9cf3452e3 00:20:09.414 15:28:22 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:09.672 15:28:23 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=da5579c1-3d53-4ce5-9d75-a73c22500a11 00:20:09.672 15:28:23 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u da5579c1-3d53-4ce5-9d75-a73c22500a11 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:09.929 15:28:23 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:09.929 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:09.929 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:09.929 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:09.929 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:09.929 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:10.186 { 00:20:10.186 "name": "97e4aab3-2719-4dd3-832e-b398b9c9e691", 00:20:10.186 "aliases": [ 00:20:10.186 "lvs/nvme0n1p0" 00:20:10.186 ], 00:20:10.186 "product_name": "Logical Volume", 00:20:10.186 "block_size": 4096, 00:20:10.186 "num_blocks": 26476544, 00:20:10.186 "uuid": "97e4aab3-2719-4dd3-832e-b398b9c9e691", 00:20:10.186 "assigned_rate_limits": { 00:20:10.186 "rw_ios_per_sec": 0, 00:20:10.186 "rw_mbytes_per_sec": 0, 00:20:10.186 "r_mbytes_per_sec": 0, 00:20:10.186 "w_mbytes_per_sec": 0 00:20:10.186 }, 00:20:10.186 "claimed": false, 00:20:10.186 "zoned": false, 00:20:10.186 "supported_io_types": { 00:20:10.186 "read": true, 00:20:10.186 "write": true, 00:20:10.186 "unmap": true, 00:20:10.186 "flush": false, 00:20:10.186 "reset": true, 00:20:10.186 "nvme_admin": false, 00:20:10.186 "nvme_io": false, 00:20:10.186 "nvme_io_md": false, 00:20:10.186 "write_zeroes": true, 00:20:10.186 "zcopy": false, 00:20:10.186 "get_zone_info": false, 00:20:10.186 "zone_management": false, 00:20:10.186 "zone_append": false, 00:20:10.186 "compare": false, 00:20:10.186 "compare_and_write": false, 00:20:10.186 "abort": false, 
00:20:10.186 "seek_hole": true, 00:20:10.186 "seek_data": true, 00:20:10.186 "copy": false, 00:20:10.186 "nvme_iov_md": false 00:20:10.186 }, 00:20:10.186 "driver_specific": { 00:20:10.186 "lvol": { 00:20:10.186 "lvol_store_uuid": "da5579c1-3d53-4ce5-9d75-a73c22500a11", 00:20:10.186 "base_bdev": "nvme0n1", 00:20:10.186 "thin_provision": true, 00:20:10.186 "num_allocated_clusters": 0, 00:20:10.186 "snapshot": false, 00:20:10.186 "clone": false, 00:20:10.186 "esnap_clone": false 00:20:10.186 } 00:20:10.186 } 00:20:10.186 } 00:20:10.186 ]' 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:10.186 15:28:23 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:10.186 15:28:23 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:10.186 15:28:23 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:10.186 15:28:23 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:10.444 15:28:24 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:10.444 15:28:24 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:10.444 15:28:24 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:10.444 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:10.444 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:10.444 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:10.444 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:10.444 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:10.702 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:10.702 { 00:20:10.702 "name": "97e4aab3-2719-4dd3-832e-b398b9c9e691", 00:20:10.702 "aliases": [ 00:20:10.702 "lvs/nvme0n1p0" 00:20:10.702 ], 00:20:10.702 "product_name": "Logical Volume", 00:20:10.702 "block_size": 4096, 00:20:10.702 "num_blocks": 26476544, 00:20:10.702 "uuid": "97e4aab3-2719-4dd3-832e-b398b9c9e691", 00:20:10.702 "assigned_rate_limits": { 00:20:10.702 "rw_ios_per_sec": 0, 00:20:10.702 "rw_mbytes_per_sec": 0, 00:20:10.702 "r_mbytes_per_sec": 0, 00:20:10.702 "w_mbytes_per_sec": 0 00:20:10.702 }, 00:20:10.702 "claimed": false, 00:20:10.702 "zoned": false, 00:20:10.702 "supported_io_types": { 00:20:10.702 "read": true, 00:20:10.702 "write": true, 00:20:10.702 "unmap": true, 00:20:10.702 "flush": false, 00:20:10.702 "reset": true, 00:20:10.702 "nvme_admin": false, 00:20:10.702 "nvme_io": false, 00:20:10.702 "nvme_io_md": false, 00:20:10.702 "write_zeroes": true, 00:20:10.702 "zcopy": false, 00:20:10.702 "get_zone_info": false, 00:20:10.702 "zone_management": false, 00:20:10.702 "zone_append": false, 00:20:10.702 "compare": false, 00:20:10.702 "compare_and_write": false, 00:20:10.702 "abort": false, 00:20:10.702 "seek_hole": true, 00:20:10.702 "seek_data": true, 
00:20:10.702 "copy": false, 00:20:10.702 "nvme_iov_md": false 00:20:10.702 }, 00:20:10.703 "driver_specific": { 00:20:10.703 "lvol": { 00:20:10.703 "lvol_store_uuid": "da5579c1-3d53-4ce5-9d75-a73c22500a11", 00:20:10.703 "base_bdev": "nvme0n1", 00:20:10.703 "thin_provision": true, 00:20:10.703 "num_allocated_clusters": 0, 00:20:10.703 "snapshot": false, 00:20:10.703 "clone": false, 00:20:10.703 "esnap_clone": false 00:20:10.703 } 00:20:10.703 } 00:20:10.703 } 00:20:10.703 ]' 00:20:10.703 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:10.960 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:10.960 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:10.960 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:10.960 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:10.960 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:10.960 15:28:24 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:10.960 15:28:24 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:11.218 15:28:24 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:11.218 15:28:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:11.218 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:11.218 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:11.218 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:11.218 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:11.218 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 97e4aab3-2719-4dd3-832e-b398b9c9e691 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:11.476 { 00:20:11.476 "name": "97e4aab3-2719-4dd3-832e-b398b9c9e691", 00:20:11.476 "aliases": [ 00:20:11.476 "lvs/nvme0n1p0" 00:20:11.476 ], 00:20:11.476 "product_name": "Logical Volume", 00:20:11.476 "block_size": 4096, 00:20:11.476 "num_blocks": 26476544, 00:20:11.476 "uuid": "97e4aab3-2719-4dd3-832e-b398b9c9e691", 00:20:11.476 "assigned_rate_limits": { 00:20:11.476 "rw_ios_per_sec": 0, 00:20:11.476 "rw_mbytes_per_sec": 0, 00:20:11.476 "r_mbytes_per_sec": 0, 00:20:11.476 "w_mbytes_per_sec": 0 00:20:11.476 }, 00:20:11.476 "claimed": false, 00:20:11.476 "zoned": false, 00:20:11.476 "supported_io_types": { 00:20:11.476 "read": true, 00:20:11.476 "write": true, 00:20:11.476 "unmap": true, 00:20:11.476 "flush": false, 00:20:11.476 "reset": true, 00:20:11.476 "nvme_admin": false, 00:20:11.476 "nvme_io": false, 00:20:11.476 "nvme_io_md": false, 00:20:11.476 "write_zeroes": true, 00:20:11.476 "zcopy": false, 00:20:11.476 "get_zone_info": false, 00:20:11.476 "zone_management": false, 00:20:11.476 "zone_append": false, 00:20:11.476 "compare": false, 00:20:11.476 "compare_and_write": false, 00:20:11.476 "abort": false, 00:20:11.476 "seek_hole": true, 00:20:11.476 "seek_data": true, 00:20:11.476 "copy": false, 00:20:11.476 "nvme_iov_md": false 00:20:11.476 }, 00:20:11.476 "driver_specific": { 00:20:11.476 "lvol": { 00:20:11.476 "lvol_store_uuid": "da5579c1-3d53-4ce5-9d75-a73c22500a11", 00:20:11.476 "base_bdev": "nvme0n1", 
00:20:11.476 "thin_provision": true, 00:20:11.476 "num_allocated_clusters": 0, 00:20:11.476 "snapshot": false, 00:20:11.476 "clone": false, 00:20:11.476 "esnap_clone": false 00:20:11.476 } 00:20:11.476 } 00:20:11.476 } 00:20:11.476 ]' 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:11.476 15:28:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 97e4aab3-2719-4dd3-832e-b398b9c9e691 --l2p_dram_limit 10' 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:11.476 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:11.476 15:28:24 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 97e4aab3-2719-4dd3-832e-b398b9c9e691 --l2p_dram_limit 10 -c nvc0n1p0 00:20:11.735 [2024-07-11 15:28:25.165922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.165982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.735 [2024-07-11 15:28:25.166081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:11.735 [2024-07-11 15:28:25.166100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.166184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.166206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.735 [2024-07-11 15:28:25.166220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:11.735 [2024-07-11 15:28:25.166234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.166265] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.735 [2024-07-11 15:28:25.167287] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.735 [2024-07-11 15:28:25.167328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.167348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.735 [2024-07-11 15:28:25.167360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:20:11.735 [2024-07-11 15:28:25.167374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.167511] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ae11445a-6c38-4afe-8ce2-49581ac79788 00:20:11.735 [2024-07-11 
15:28:25.168640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.168679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:11.735 [2024-07-11 15:28:25.168716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:11.735 [2024-07-11 15:28:25.168728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.173236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.173277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.735 [2024-07-11 15:28:25.173300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.452 ms 00:20:11.735 [2024-07-11 15:28:25.173312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.173425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.173443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.735 [2024-07-11 15:28:25.173458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:11.735 [2024-07-11 15:28:25.173469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.173551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.173580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.735 [2024-07-11 15:28:25.173594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:11.735 [2024-07-11 15:28:25.173608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.173642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.735 [2024-07-11 15:28:25.178134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.178180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.735 [2024-07-11 15:28:25.178214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.503 ms 00:20:11.735 [2024-07-11 15:28:25.178231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.178293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.178337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.735 [2024-07-11 15:28:25.178366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:11.735 [2024-07-11 15:28:25.178379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.178434] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:11.735 [2024-07-11 15:28:25.178595] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:11.735 [2024-07-11 15:28:25.178613] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.735 [2024-07-11 15:28:25.178632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:11.735 [2024-07-11 15:28:25.178648] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:20:11.735 [2024-07-11 15:28:25.178664] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.735 [2024-07-11 15:28:25.178677] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:11.735 [2024-07-11 15:28:25.178690] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.735 [2024-07-11 15:28:25.178703] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:11.735 [2024-07-11 15:28:25.178717] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:11.735 [2024-07-11 15:28:25.178729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.178742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.735 [2024-07-11 15:28:25.178754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:20:11.735 [2024-07-11 15:28:25.178766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.178864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.735 [2024-07-11 15:28:25.178881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.735 [2024-07-11 15:28:25.178893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:11.735 [2024-07-11 15:28:25.178906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.735 [2024-07-11 15:28:25.179004] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.735 [2024-07-11 15:28:25.179024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.735 [2024-07-11 15:28:25.179048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.735 [2024-07-11 15:28:25.179086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.735 [2024-07-11 15:28:25.179164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.735 [2024-07-11 15:28:25.179187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.735 [2024-07-11 15:28:25.179201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:11.735 [2024-07-11 15:28:25.179212] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.735 [2024-07-11 15:28:25.179226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.735 [2024-07-11 15:28:25.179237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:11.735 [2024-07-11 15:28:25.179249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.735 [2024-07-11 15:28:25.179274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:20:11.735 [2024-07-11 15:28:25.179285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.735 [2024-07-11 15:28:25.179308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179320] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.735 [2024-07-11 15:28:25.179343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.735 [2024-07-11 15:28:25.179379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.735 [2024-07-11 15:28:25.179414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.735 [2024-07-11 15:28:25.179463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.735 [2024-07-11 15:28:25.179506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.735 [2024-07-11 15:28:25.179518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:11.735 [2024-07-11 15:28:25.179529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.735 [2024-07-11 15:28:25.179541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:11.735 [2024-07-11 15:28:25.179552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:11.735 [2024-07-11 15:28:25.179567] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:11.735 [2024-07-11 15:28:25.179590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:11.735 [2024-07-11 15:28:25.179601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179612] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.735 [2024-07-11 15:28:25.179625] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.735 [2024-07-11 15:28:25.179637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.735 [2024-07-11 15:28:25.179663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.735 [2024-07-11 15:28:25.179674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.735 [2024-07-11 15:28:25.179689] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.735 [2024-07-11 15:28:25.179700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.735 [2024-07-11 15:28:25.179712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.735 [2024-07-11 15:28:25.179723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.735 [2024-07-11 15:28:25.179741] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.735 [2024-07-11 15:28:25.179755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.735 [2024-07-11 15:28:25.179788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:11.735 [2024-07-11 15:28:25.179801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:11.735 [2024-07-11 15:28:25.179815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:11.735 [2024-07-11 15:28:25.179826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:11.735 [2024-07-11 15:28:25.179855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:11.735 [2024-07-11 15:28:25.179881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:11.735 [2024-07-11 15:28:25.179894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:11.735 [2024-07-11 15:28:25.179905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:11.735 [2024-07-11 15:28:25.179919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:11.735 [2024-07-11 15:28:25.179930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:11.735 [2024-07-11 15:28:25.179945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:11.736 [2024-07-11 15:28:25.179956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:11.736 [2024-07-11 15:28:25.179968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:11.736 [2024-07-11 15:28:25.179980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:11.736 [2024-07-11 15:28:25.179992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.736 [2024-07-11 15:28:25.180004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.736 [2024-07-11 15:28:25.180018] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.736 [2024-07-11 15:28:25.180045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.736 [2024-07-11 15:28:25.180059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.736 [2024-07-11 15:28:25.180071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:11.736 [2024-07-11 15:28:25.180085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.736 [2024-07-11 15:28:25.180096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.736 [2024-07-11 15:28:25.180125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:20:11.736 [2024-07-11 15:28:25.180136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.736 [2024-07-11 15:28:25.180213] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:11.736 [2024-07-11 15:28:25.180239] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:13.635 [2024-07-11 15:28:27.123101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.123172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:13.635 [2024-07-11 15:28:27.123198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1942.893 ms 00:20:13.635 [2024-07-11 15:28:27.123212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.155839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.155900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.635 [2024-07-11 15:28:27.155941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.351 ms 00:20:13.635 [2024-07-11 15:28:27.155954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.156187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.156209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.635 [2024-07-11 15:28:27.156226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:13.635 [2024-07-11 15:28:27.156242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.193244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.193299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.635 [2024-07-11 15:28:27.193338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.934 ms 00:20:13.635 [2024-07-11 15:28:27.193350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.193403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.193441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.635 [2024-07-11 15:28:27.193456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:20:13.635 [2024-07-11 15:28:27.193467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.193856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.193875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.635 [2024-07-11 15:28:27.193889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:13.635 [2024-07-11 15:28:27.193901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.194119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.194142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.635 [2024-07-11 15:28:27.194161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:20:13.635 [2024-07-11 15:28:27.194173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.211008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.211085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.635 [2024-07-11 15:28:27.211124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.802 ms 00:20:13.635 [2024-07-11 15:28:27.211137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.635 [2024-07-11 15:28:27.223681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:13.635 [2024-07-11 15:28:27.226520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.635 [2024-07-11 15:28:27.226557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.635 [2024-07-11 15:28:27.226592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.287 ms 00:20:13.635 [2024-07-11 15:28:27.226606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.295430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.295506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:13.894 [2024-07-11 15:28:27.295544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.788 ms 00:20:13.894 [2024-07-11 15:28:27.295559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.295794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.295820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.894 [2024-07-11 15:28:27.295833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:20:13.894 [2024-07-11 15:28:27.295849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.325992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.326083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:13.894 [2024-07-11 15:28:27.326104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.081 ms 00:20:13.894 [2024-07-11 15:28:27.326119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.355402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 
15:28:27.355480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:13.894 [2024-07-11 15:28:27.355499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.218 ms 00:20:13.894 [2024-07-11 15:28:27.355513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.356235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.356275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.894 [2024-07-11 15:28:27.356293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:20:13.894 [2024-07-11 15:28:27.356311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.440491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.440559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:13.894 [2024-07-11 15:28:27.440597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.112 ms 00:20:13.894 [2024-07-11 15:28:27.440615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.470831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.470894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:13.894 [2024-07-11 15:28:27.470913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.165 ms 00:20:13.894 [2024-07-11 15:28:27.470926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.894 [2024-07-11 15:28:27.500421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.894 [2024-07-11 15:28:27.500498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:13.894 [2024-07-11 15:28:27.500532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.449 ms 00:20:13.894 [2024-07-11 15:28:27.500545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.152 [2024-07-11 15:28:27.530947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-07-11 15:28:27.530992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.153 [2024-07-11 15:28:27.531027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.355 ms 00:20:14.153 [2024-07-11 15:28:27.531090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-07-11 15:28:27.531158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-07-11 15:28:27.531198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.153 [2024-07-11 15:28:27.531213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:14.153 [2024-07-11 15:28:27.531229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-07-11 15:28:27.531354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.153 [2024-07-11 15:28:27.531379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.153 [2024-07-11 15:28:27.531395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:14.153 [2024-07-11 15:28:27.531425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.153 [2024-07-11 15:28:27.532518] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2366.031 ms, result 0 00:20:14.153 { 00:20:14.153 "name": "ftl0", 00:20:14.153 "uuid": "ae11445a-6c38-4afe-8ce2-49581ac79788" 00:20:14.153 } 00:20:14.153 15:28:27 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:14.153 15:28:27 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:14.412 15:28:27 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:14.412 15:28:27 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:14.673 [2024-07-11 15:28:28.092024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.092094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.673 [2024-07-11 15:28:28.092137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.673 [2024-07-11 15:28:28.092149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.092188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.673 [2024-07-11 15:28:28.095480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.095519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.673 [2024-07-11 15:28:28.095552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:20:14.673 [2024-07-11 15:28:28.095565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.095864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.095896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.673 [2024-07-11 15:28:28.095939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:14.673 [2024-07-11 15:28:28.095969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.099193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.099227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:14.673 [2024-07-11 15:28:28.099258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:20:14.673 [2024-07-11 15:28:28.099271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.105566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.105599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:14.673 [2024-07-11 15:28:28.105632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.271 ms 00:20:14.673 [2024-07-11 15:28:28.105656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.137230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.137296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:14.673 [2024-07-11 15:28:28.137317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:20:14.673 [2024-07-11 15:28:28.137332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 
15:28:28.156039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.156131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:14.673 [2024-07-11 15:28:28.156151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.626 ms 00:20:14.673 [2024-07-11 15:28:28.156166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.156362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.156388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:14.673 [2024-07-11 15:28:28.156402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:20:14.673 [2024-07-11 15:28:28.156416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.188323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.188393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:14.673 [2024-07-11 15:28:28.188428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.883 ms 00:20:14.673 [2024-07-11 15:28:28.188441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.220479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.220525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:14.673 [2024-07-11 15:28:28.220559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.937 ms 00:20:14.673 [2024-07-11 15:28:28.220574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.250498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.250544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:14.673 [2024-07-11 15:28:28.250577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.861 ms 00:20:14.673 [2024-07-11 15:28:28.250591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.280371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.673 [2024-07-11 15:28:28.280418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:14.673 [2024-07-11 15:28:28.280436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.668 ms 00:20:14.673 [2024-07-11 15:28:28.280450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.673 [2024-07-11 15:28:28.280498] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:14.673 [2024-07-11 15:28:28.280525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 
15:28:28.280716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.280993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:20:14.673 [2024-07-11 15:28:28.281091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:14.673 [2024-07-11 15:28:28.281389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.281995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:14.674 [2024-07-11 15:28:28.282140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:14.674 [2024-07-11 15:28:28.282155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae11445a-6c38-4afe-8ce2-49581ac79788 00:20:14.674 [2024-07-11 15:28:28.282170] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:14.674 [2024-07-11 15:28:28.282185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:14.674 [2024-07-11 15:28:28.282201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:14.674 [2024-07-11 15:28:28.282213] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:14.674 [2024-07-11 15:28:28.282226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:14.674 [2024-07-11 15:28:28.282238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:14.674 [2024-07-11 15:28:28.282251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:14.674 [2024-07-11 15:28:28.282262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:14.674 [2024-07-11 15:28:28.282274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:14.674 [2024-07-11 15:28:28.282286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.674 [2024-07-11 15:28:28.282300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:14.674 [2024-07-11 15:28:28.282313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:20:14.674 [2024-07-11 15:28:28.282327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.933 [2024-07-11 15:28:28.299378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.933 [2024-07-11 15:28:28.299422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:14.933 [2024-07-11 15:28:28.299473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.985 ms 00:20:14.933 [2024-07-11 15:28:28.299487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.933 [2024-07-11 15:28:28.299926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.933 [2024-07-11 15:28:28.299974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:14.933 [2024-07-11 15:28:28.299990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:20:14.933 [2024-07-11 15:28:28.300008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.933 [2024-07-11 15:28:28.350488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.350555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.934 [2024-07-11 15:28:28.350590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.350605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.350684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.350703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.934 [2024-07-11 15:28:28.350716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.350734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.350860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.350886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.934 [2024-07-11 15:28:28.350900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.350914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.350955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.350973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:20:14.934 [2024-07-11 15:28:28.350985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.350999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.443429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.443528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.934 [2024-07-11 15:28:28.443547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.443561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.520925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.934 [2024-07-11 15:28:28.521023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.521200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.934 [2024-07-11 15:28:28.521237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.521312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.934 [2024-07-11 15:28:28.521347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.521548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.934 [2024-07-11 15:28:28.521584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.521656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:14.934 [2024-07-11 15:28:28.521693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.521759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.934 [2024-07-11 15:28:28.521790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.521873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.934 [2024-07-11 15:28:28.521896] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.934 [2024-07-11 15:28:28.521909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.934 [2024-07-11 15:28:28.521923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.934 [2024-07-11 15:28:28.522124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 430.042 ms, result 0 00:20:14.934 true 00:20:14.934 15:28:28 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80835 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 80835 ']' 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 80835 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80835 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.193 killing process with pid 80835 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80835' 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 80835 00:20:15.193 15:28:28 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 80835 00:20:20.459 15:28:33 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:24.643 262144+0 records in 00:20:24.643 262144+0 records out 00:20:24.643 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.71947 s, 228 MB/s 00:20:24.643 15:28:37 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:26.548 15:28:40 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:26.807 [2024-07-11 15:28:40.180928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:20:26.807 [2024-07-11 15:28:40.181128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81067 ] 00:20:26.807 [2024-07-11 15:28:40.339683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.065 [2024-07-11 15:28:40.520739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.324 [2024-07-11 15:28:40.827251] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.324 [2024-07-11 15:28:40.827345] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:27.584 [2024-07-11 15:28:40.986980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.584 [2024-07-11 15:28:40.987071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:27.584 [2024-07-11 15:28:40.987102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:27.584 [2024-07-11 15:28:40.987138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.584 [2024-07-11 15:28:40.987216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.584 [2024-07-11 15:28:40.987237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.584 [2024-07-11 15:28:40.987249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:27.584 [2024-07-11 15:28:40.987264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.584 [2024-07-11 15:28:40.987293] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:27.584 [2024-07-11 15:28:40.988262] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:27.584 [2024-07-11 15:28:40.988319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.584 [2024-07-11 15:28:40.988337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.584 [2024-07-11 15:28:40.988349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:20:27.584 [2024-07-11 15:28:40.988360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.584 [2024-07-11 15:28:40.989548] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:27.584 [2024-07-11 15:28:41.005178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.584 [2024-07-11 15:28:41.005218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:27.584 [2024-07-11 15:28:41.005250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.632 ms 00:20:27.584 [2024-07-11 15:28:41.005261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.584 [2024-07-11 15:28:41.005327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.584 [2024-07-11 15:28:41.005344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:27.584 [2024-07-11 15:28:41.005359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:27.584 [2024-07-11 15:28:41.005369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.584 [2024-07-11 15:28:41.009573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:27.584 [2024-07-11 15:28:41.009614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.584 [2024-07-11 15:28:41.009645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.122 ms 00:20:27.585 [2024-07-11 15:28:41.009655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.009744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.585 [2024-07-11 15:28:41.009764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.585 [2024-07-11 15:28:41.009775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:27.585 [2024-07-11 15:28:41.009786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.009857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.585 [2024-07-11 15:28:41.009873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:27.585 [2024-07-11 15:28:41.009884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:27.585 [2024-07-11 15:28:41.009895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.009925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:27.585 [2024-07-11 15:28:41.014014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.585 [2024-07-11 15:28:41.014076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.585 [2024-07-11 15:28:41.014092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:20:27.585 [2024-07-11 15:28:41.014103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.014149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.585 [2024-07-11 15:28:41.014165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:27.585 [2024-07-11 15:28:41.014177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:27.585 [2024-07-11 15:28:41.014188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.014233] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:27.585 [2024-07-11 15:28:41.014263] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:27.585 [2024-07-11 15:28:41.014308] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:27.585 [2024-07-11 15:28:41.014333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:27.585 [2024-07-11 15:28:41.014499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:27.585 [2024-07-11 15:28:41.014514] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:27.585 [2024-07-11 15:28:41.014528] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:27.585 [2024-07-11 15:28:41.014543] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:27.585 [2024-07-11 15:28:41.014555] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:27.585 [2024-07-11 15:28:41.014567] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:27.585 [2024-07-11 15:28:41.014577] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:27.585 [2024-07-11 15:28:41.014588] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:27.585 [2024-07-11 15:28:41.014598] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:27.585 [2024-07-11 15:28:41.014610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.585 [2024-07-11 15:28:41.014625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:27.585 [2024-07-11 15:28:41.014636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:20:27.585 [2024-07-11 15:28:41.014647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.014732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.585 [2024-07-11 15:28:41.014746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:27.585 [2024-07-11 15:28:41.014757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:27.585 [2024-07-11 15:28:41.014767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.585 [2024-07-11 15:28:41.014882] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:27.585 [2024-07-11 15:28:41.014898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:27.585 [2024-07-11 15:28:41.014913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.585 [2024-07-11 15:28:41.014924] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.585 [2024-07-11 15:28:41.014934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:27.585 [2024-07-11 15:28:41.014943] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:27.585 [2024-07-11 15:28:41.014953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:27.585 [2024-07-11 15:28:41.014964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:27.585 [2024-07-11 15:28:41.014973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:27.585 [2024-07-11 15:28:41.014983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.585 [2024-07-11 15:28:41.014992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:27.585 [2024-07-11 15:28:41.015001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:27.585 [2024-07-11 15:28:41.015010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:27.585 [2024-07-11 15:28:41.015020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:27.585 [2024-07-11 15:28:41.015031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:27.585 [2024-07-11 15:28:41.015057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:27.585 [2024-07-11 15:28:41.015078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:27.585 [2024-07-11 15:28:41.015087] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:27.585 [2024-07-11 15:28:41.015149] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.585 [2024-07-11 15:28:41.015178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:27.585 [2024-07-11 15:28:41.015191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.585 [2024-07-11 15:28:41.015209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:27.585 [2024-07-11 15:28:41.015219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015228] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.585 [2024-07-11 15:28:41.015238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:27.585 [2024-07-11 15:28:41.015247] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:27.585 [2024-07-11 15:28:41.015266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:27.585 [2024-07-11 15:28:41.015275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.585 [2024-07-11 15:28:41.015294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:27.585 [2024-07-11 15:28:41.015304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:27.585 [2024-07-11 15:28:41.015313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:27.585 [2024-07-11 15:28:41.015323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:27.585 [2024-07-11 15:28:41.015332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:27.585 [2024-07-11 15:28:41.015341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:27.585 [2024-07-11 15:28:41.015360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:27.585 [2024-07-11 15:28:41.015386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.585 [2024-07-11 15:28:41.015412] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:27.585 [2024-07-11 15:28:41.015423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:27.586 [2024-07-11 15:28:41.015434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:27.586 [2024-07-11 15:28:41.015445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:27.586 [2024-07-11 15:28:41.015457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:27.586 [2024-07-11 15:28:41.015468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:27.586 [2024-07-11 15:28:41.015478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:27.586 
[2024-07-11 15:28:41.015489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:27.586 [2024-07-11 15:28:41.015499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:27.586 [2024-07-11 15:28:41.015509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:27.586 [2024-07-11 15:28:41.015521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:27.586 [2024-07-11 15:28:41.015535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:27.586 [2024-07-11 15:28:41.015559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:27.586 [2024-07-11 15:28:41.015570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:27.586 [2024-07-11 15:28:41.015581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:27.586 [2024-07-11 15:28:41.015592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:27.586 [2024-07-11 15:28:41.015603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:27.586 [2024-07-11 15:28:41.015614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:27.586 [2024-07-11 15:28:41.015625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:27.586 [2024-07-11 15:28:41.015638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:27.586 [2024-07-11 15:28:41.015649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:27.586 [2024-07-11 15:28:41.015705] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:27.586 [2024-07-11 15:28:41.015717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:27.586 [2024-07-11 15:28:41.015740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:27.586 [2024-07-11 15:28:41.015752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:27.586 [2024-07-11 15:28:41.015763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:27.586 [2024-07-11 15:28:41.015775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.015792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:27.586 [2024-07-11 15:28:41.015803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:20:27.586 [2024-07-11 15:28:41.015815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.054050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.054117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.586 [2024-07-11 15:28:41.054139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.124 ms 00:20:27.586 [2024-07-11 15:28:41.054150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.054269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.054286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.586 [2024-07-11 15:28:41.054299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:27.586 [2024-07-11 15:28:41.054310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.091155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.091208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.586 [2024-07-11 15:28:41.091241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.718 ms 00:20:27.586 [2024-07-11 15:28:41.091252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.091309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.091323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.586 [2024-07-11 15:28:41.091335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:27.586 [2024-07-11 15:28:41.091345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.091719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.091737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.586 [2024-07-11 15:28:41.091750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:27.586 [2024-07-11 15:28:41.091760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.091917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.091934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.586 [2024-07-11 15:28:41.091945] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:27.586 [2024-07-11 15:28:41.091955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.107309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.107347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.586 [2024-07-11 15:28:41.107378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.330 ms 00:20:27.586 [2024-07-11 15:28:41.107389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.123206] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:27.586 [2024-07-11 15:28:41.123249] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:27.586 [2024-07-11 15:28:41.123286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.123297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:27.586 [2024-07-11 15:28:41.123309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.780 ms 00:20:27.586 [2024-07-11 15:28:41.123319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.151746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.151788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:27.586 [2024-07-11 15:28:41.151820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.384 ms 00:20:27.586 [2024-07-11 15:28:41.151831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.166878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.166917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:27.586 [2024-07-11 15:28:41.166948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.974 ms 00:20:27.586 [2024-07-11 15:28:41.166958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.181893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.181931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:27.586 [2024-07-11 15:28:41.181961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.894 ms 00:20:27.586 [2024-07-11 15:28:41.181971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.586 [2024-07-11 15:28:41.182852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.586 [2024-07-11 15:28:41.182893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:27.586 [2024-07-11 15:28:41.182908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:20:27.587 [2024-07-11 15:28:41.182920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.259955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.260013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:27.846 [2024-07-11 15:28:41.260088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 77.011 ms 00:20:27.846 [2024-07-11 15:28:41.260103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.272887] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:27.846 [2024-07-11 15:28:41.275577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.275636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:27.846 [2024-07-11 15:28:41.275670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.408 ms 00:20:27.846 [2024-07-11 15:28:41.275681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.275794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.275813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:27.846 [2024-07-11 15:28:41.275826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:27.846 [2024-07-11 15:28:41.275837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.275919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.275937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:27.846 [2024-07-11 15:28:41.275954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:27.846 [2024-07-11 15:28:41.275965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.275995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.276008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:27.846 [2024-07-11 15:28:41.276034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:27.846 [2024-07-11 15:28:41.276060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.276126] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:27.846 [2024-07-11 15:28:41.276148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.276159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:27.846 [2024-07-11 15:28:41.276170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:27.846 [2024-07-11 15:28:41.276194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.306181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.306225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:27.846 [2024-07-11 15:28:41.306243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.958 ms 00:20:27.846 [2024-07-11 15:28:41.306255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.846 [2024-07-11 15:28:41.306336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.846 [2024-07-11 15:28:41.306354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:27.846 [2024-07-11 15:28:41.306377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:27.846 [2024-07-11 15:28:41.306388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
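Each management step above is traced by mngt/ftl_mngt.c as a fixed four-entry group -- Action, then the step name, its duration, and its status (0 on success) -- so the per-step durations give a quick profile of where FTL startup time goes; in this run 'Initialize metadata' (38.124 ms) and 'Initialize NV cache' (36.718 ms) dominate. A minimal sketch for tabulating the slowest steps, assuming the console output has been saved to a file (the name console.log is an assumption) with one trace entry per line as the live log emits them:

  # pair every 'name:' entry (line 428) with the 'duration:' entry (line 430)
  # that follows it, then sort descending by duration in milliseconds
  grep -E '(428|430):trace_step' console.log \
    | awk -F'name: |duration: ' '/name: / {step=$2} /duration: / {printf "%12s  %s\n", $2, step}' \
    | sort -rn | head

Each printed row pairs a step duration with its step name, so the head of the sorted output is the dominant startup cost.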
00:20:27.846 [2024-07-11 15:28:41.307506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.017 ms, result 0 00:21:08.994  Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-11 15:29:22.376633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.376853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:08.994 [2024-07-11 15:29:22.376999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:08.994 [2024-07-11 15:29:22.377153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.377245] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:08.994 [2024-07-11 15:29:22.380741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.380879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:08.994 [2024-07-11 15:29:22.381005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:21:08.994 [2024-07-11 15:29:22.381071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.382698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.382888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:08.994 [2024-07-11 15:29:22.382936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:21:08.994 [2024-07-11 15:29:22.382948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.399368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.399425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:08.994 [2024-07-11 15:29:22.399458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.396 ms 00:21:08.994 [2024-07-11 15:29:22.399469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11
15:29:22.405912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.405943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:08.994 [2024-07-11 15:29:22.405979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.405 ms 00:21:08.994 [2024-07-11 15:29:22.405989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.436085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.436125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:08.994 [2024-07-11 15:29:22.436157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.982 ms 00:21:08.994 [2024-07-11 15:29:22.436167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.454108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.454151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:08.994 [2024-07-11 15:29:22.454169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.902 ms 00:21:08.994 [2024-07-11 15:29:22.454181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.454352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.454387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:08.994 [2024-07-11 15:29:22.454399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:21:08.994 [2024-07-11 15:29:22.454410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.994 [2024-07-11 15:29:22.485348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.994 [2024-07-11 15:29:22.485387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:08.994 [2024-07-11 15:29:22.485419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.901 ms 00:21:08.995 [2024-07-11 15:29:22.485429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.995 [2024-07-11 15:29:22.515643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.995 [2024-07-11 15:29:22.515680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:08.995 [2024-07-11 15:29:22.515713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.158 ms 00:21:08.995 [2024-07-11 15:29:22.515722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.995 [2024-07-11 15:29:22.545335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.995 [2024-07-11 15:29:22.545374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:08.995 [2024-07-11 15:29:22.545406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.573 ms 00:21:08.995 [2024-07-11 15:29:22.545431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.995 [2024-07-11 15:29:22.575092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.995 [2024-07-11 15:29:22.575138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:08.995 [2024-07-11 15:29:22.575171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.578 ms 00:21:08.995 [2024-07-11 15:29:22.575182] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.995 [2024-07-11 15:29:22.575237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:08.995 [2024-07-11 15:29:22.575273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:08.995 [2024-07-11 15:29:22.575989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.575999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576177] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 
15:29:22.576501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:08.996 [2024-07-11 15:29:22.576533] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:08.996 [2024-07-11 15:29:22.576544] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae11445a-6c38-4afe-8ce2-49581ac79788 00:21:08.996 [2024-07-11 15:29:22.576555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:08.996 [2024-07-11 15:29:22.576566] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:08.996 [2024-07-11 15:29:22.576576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:08.996 [2024-07-11 15:29:22.576594] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:08.996 [2024-07-11 15:29:22.576604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:08.996 [2024-07-11 15:29:22.576615] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:08.996 [2024-07-11 15:29:22.576626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:08.996 [2024-07-11 15:29:22.576635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:08.996 [2024-07-11 15:29:22.576645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:08.996 [2024-07-11 15:29:22.576656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.996 [2024-07-11 15:29:22.576667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:08.996 [2024-07-11 15:29:22.576679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:21:08.996 [2024-07-11 15:29:22.576690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.996 [2024-07-11 15:29:22.592698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.996 [2024-07-11 15:29:22.592739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:08.996 [2024-07-11 15:29:22.592770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.952 ms 00:21:08.996 [2024-07-11 15:29:22.592791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.996 [2024-07-11 15:29:22.593243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.996 [2024-07-11 15:29:22.593260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:08.996 [2024-07-11 15:29:22.593272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:21:08.996 [2024-07-11 15:29:22.593283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.629796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.629862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.256 [2024-07-11 15:29:22.629895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.629905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.629970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.629985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:21:09.256 [2024-07-11 15:29:22.629996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.630006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.630141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.630167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.256 [2024-07-11 15:29:22.630179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.630190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.630212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.630225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.256 [2024-07-11 15:29:22.630237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.630247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.725520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.725592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.256 [2024-07-11 15:29:22.725609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.725620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.807884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.807947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.256 [2024-07-11 15:29:22.807981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.807991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.808115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.256 [2024-07-11 15:29:22.808127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.808144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.808218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.256 [2024-07-11 15:29:22.808230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.808240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.808372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.256 [2024-07-11 15:29:22.808383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.808400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.808484] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:09.256 [2024-07-11 15:29:22.808496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.808507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.808566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.256 [2024-07-11 15:29:22.808577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.808588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.256 [2024-07-11 15:29:22.808659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.256 [2024-07-11 15:29:22.808671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.256 [2024-07-11 15:29:22.808682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.256 [2024-07-11 15:29:22.808817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 432.146 ms, result 0 00:21:10.634 00:21:10.634 00:21:10.634 15:29:23 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:10.634 [2024-07-11 15:29:23.979263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:10.634 [2024-07-11 15:29:23.979492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81509 ] 00:21:10.634 [2024-07-11 15:29:24.151474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.893 [2024-07-11 15:29:24.330642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.152 [2024-07-11 15:29:24.629602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.152 [2024-07-11 15:29:24.629694] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.414 [2024-07-11 15:29:24.789186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.789246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.414 [2024-07-11 15:29:24.789282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.414 [2024-07-11 15:29:24.789293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.789360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.789380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.414 [2024-07-11 15:29:24.789392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:11.414 [2024-07-11 15:29:24.789405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.789433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:11.414 [2024-07-11 15:29:24.790423] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.414 [2024-07-11 15:29:24.790468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.790488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.414 [2024-07-11 15:29:24.790501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:21:11.414 [2024-07-11 15:29:24.790513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.791725] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:11.414 [2024-07-11 15:29:24.807821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.807860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:11.414 [2024-07-11 15:29:24.807893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.097 ms 00:21:11.414 [2024-07-11 15:29:24.807904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.807971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.807990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:11.414 [2024-07-11 15:29:24.808005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:11.414 [2024-07-11 15:29:24.808015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.812230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.812271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.414 [2024-07-11 15:29:24.812303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.093 ms 00:21:11.414 [2024-07-11 15:29:24.812313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.812418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.812439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.414 [2024-07-11 15:29:24.812467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:11.414 [2024-07-11 15:29:24.812494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.812559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.812577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.414 [2024-07-11 15:29:24.812590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:11.414 [2024-07-11 15:29:24.812601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.812635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.414 [2024-07-11 15:29:24.816723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.816756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.414 [2024-07-11 15:29:24.816787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms 00:21:11.414 [2024-07-11 
15:29:24.816797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.816839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.816853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.414 [2024-07-11 15:29:24.816864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:11.414 [2024-07-11 15:29:24.816874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.816914] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:11.414 [2024-07-11 15:29:24.816942] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:11.414 [2024-07-11 15:29:24.816981] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:11.414 [2024-07-11 15:29:24.817002] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:11.414 [2024-07-11 15:29:24.817148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.414 [2024-07-11 15:29:24.817166] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.414 [2024-07-11 15:29:24.817180] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:11.414 [2024-07-11 15:29:24.817195] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.414 [2024-07-11 15:29:24.817207] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.414 [2024-07-11 15:29:24.817219] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.414 [2024-07-11 15:29:24.817229] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.414 [2024-07-11 15:29:24.817240] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.414 [2024-07-11 15:29:24.817250] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.414 [2024-07-11 15:29:24.817261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.817292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.414 [2024-07-11 15:29:24.817321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:21:11.414 [2024-07-11 15:29:24.817332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.817421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.414 [2024-07-11 15:29:24.817435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.414 [2024-07-11 15:29:24.817447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:11.414 [2024-07-11 15:29:24.817458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.414 [2024-07-11 15:29:24.817567] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.414 [2024-07-11 15:29:24.817589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.414 [2024-07-11 15:29:24.817608] ftl_layout.c: 119:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:21:11.414 [2024-07-11 15:29:24.817620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.414 [2024-07-11 15:29:24.817631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.414 [2024-07-11 15:29:24.817642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.414 [2024-07-11 15:29:24.817653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.414 [2024-07-11 15:29:24.817664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.414 [2024-07-11 15:29:24.817674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.415 [2024-07-11 15:29:24.817696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.415 [2024-07-11 15:29:24.817706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.415 [2024-07-11 15:29:24.817716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.415 [2024-07-11 15:29:24.817727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.415 [2024-07-11 15:29:24.817738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.415 [2024-07-11 15:29:24.817750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.415 [2024-07-11 15:29:24.817772] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.415 [2024-07-11 15:29:24.817783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.415 [2024-07-11 15:29:24.817817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.415 [2024-07-11 15:29:24.817853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.415 [2024-07-11 15:29:24.817864] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.415 [2024-07-11 15:29:24.817883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.415 [2024-07-11 15:29:24.817893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.415 [2024-07-11 15:29:24.817913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.415 [2024-07-11 15:29:24.817923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.415 [2024-07-11 15:29:24.817944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.415 [2024-07-11 15:29:24.817954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.415 [2024-07-11 15:29:24.817964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.415 [2024-07-11 15:29:24.817974] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:11.415 [2024-07-11 15:29:24.817984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.415 [2024-07-11 15:29:24.817994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.415 [2024-07-11 15:29:24.818005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.415 [2024-07-11 15:29:24.818057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:11.415 [2024-07-11 15:29:24.818070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.415 [2024-07-11 15:29:24.818081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.415 [2024-07-11 15:29:24.818092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.415 [2024-07-11 15:29:24.818102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.415 [2024-07-11 15:29:24.818112] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.415 [2024-07-11 15:29:24.818124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.415 [2024-07-11 15:29:24.818135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.415 [2024-07-11 15:29:24.818149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.415 [2024-07-11 15:29:24.818161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.415 [2024-07-11 15:29:24.818172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.415 [2024-07-11 15:29:24.818182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.415 [2024-07-11 15:29:24.818193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.415 [2024-07-11 15:29:24.818203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.415 [2024-07-11 15:29:24.818214] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.415 [2024-07-11 15:29:24.818226] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.415 [2024-07-11 15:29:24.818240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.415 [2024-07-11 15:29:24.818264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.415 [2024-07-11 15:29:24.818275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.415 [2024-07-11 15:29:24.818287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.415 [2024-07-11 15:29:24.818298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.415 [2024-07-11 15:29:24.818309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.415 [2024-07-11 15:29:24.818320] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.415 [2024-07-11 15:29:24.818332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.415 [2024-07-11 15:29:24.818343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.415 [2024-07-11 15:29:24.818354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.415 [2024-07-11 15:29:24.818411] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.415 [2024-07-11 15:29:24.818424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.415 [2024-07-11 15:29:24.818448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.415 [2024-07-11 15:29:24.818460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.415 [2024-07-11 15:29:24.818471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.415 [2024-07-11 15:29:24.818484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.818500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.415 [2024-07-11 15:29:24.818513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:21:11.415 [2024-07-11 15:29:24.818524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.857505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.857598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.415 [2024-07-11 15:29:24.857636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.891 ms 00:21:11.415 [2024-07-11 15:29:24.857664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.857794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.857810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.415 [2024-07-11 15:29:24.857823] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:21:11.415 [2024-07-11 15:29:24.857834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.894248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.894323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.415 [2024-07-11 15:29:24.894343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.326 ms 00:21:11.415 [2024-07-11 15:29:24.894356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.894462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.894492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.415 [2024-07-11 15:29:24.894503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:11.415 [2024-07-11 15:29:24.894513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.894875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.894893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.415 [2024-07-11 15:29:24.894905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:11.415 [2024-07-11 15:29:24.894915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.895058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.895077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.415 [2024-07-11 15:29:24.895088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:21:11.415 [2024-07-11 15:29:24.895122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.910303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.910360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.415 [2024-07-11 15:29:24.910394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.154 ms 00:21:11.415 [2024-07-11 15:29:24.910406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.926133] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:11.415 [2024-07-11 15:29:24.926178] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:11.415 [2024-07-11 15:29:24.926213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.926225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:11.415 [2024-07-11 15:29:24.926239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.661 ms 00:21:11.415 [2024-07-11 15:29:24.926250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.415 [2024-07-11 15:29:24.955131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.415 [2024-07-11 15:29:24.955173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:11.415 [2024-07-11 15:29:24.955206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.834 ms 
00:21:11.415 [2024-07-11 15:29:24.955223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.416 [2024-07-11 15:29:24.970592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.416 [2024-07-11 15:29:24.970645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:11.416 [2024-07-11 15:29:24.970677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.316 ms 00:21:11.416 [2024-07-11 15:29:24.970687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.416 [2024-07-11 15:29:24.985718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.416 [2024-07-11 15:29:24.985756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:11.416 [2024-07-11 15:29:24.985789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.973 ms 00:21:11.416 [2024-07-11 15:29:24.985800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.416 [2024-07-11 15:29:24.986719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.416 [2024-07-11 15:29:24.986746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.416 [2024-07-11 15:29:24.986760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:21:11.416 [2024-07-11 15:29:24.986771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.056179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.056246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:11.675 [2024-07-11 15:29:25.056283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.384 ms 00:21:11.675 [2024-07-11 15:29:25.056294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.068101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:11.675 [2024-07-11 15:29:25.070675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.070708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.675 [2024-07-11 15:29:25.070723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.310 ms 00:21:11.675 [2024-07-11 15:29:25.070734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.070827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.070846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:11.675 [2024-07-11 15:29:25.070858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:11.675 [2024-07-11 15:29:25.070867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.070948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.070970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.675 [2024-07-11 15:29:25.070981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:11.675 [2024-07-11 15:29:25.070991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.071015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 
15:29:25.071063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:11.675 [2024-07-11 15:29:25.071075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.675 [2024-07-11 15:29:25.071085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.071150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:11.675 [2024-07-11 15:29:25.071184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.071195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:11.675 [2024-07-11 15:29:25.071211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:11.675 [2024-07-11 15:29:25.071222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.103710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.103775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.675 [2024-07-11 15:29:25.103796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.459 ms 00:21:11.675 [2024-07-11 15:29:25.103807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.675 [2024-07-11 15:29:25.103928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.675 [2024-07-11 15:29:25.103954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.675 [2024-07-11 15:29:25.103966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:11.676 [2024-07-11 15:29:25.103976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.676 [2024-07-11 15:29:25.105231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.452 ms, result 0 00:21:52.687  Copying: 24/1024 [MB] (24 MBps) Copying: 49/1024 [MB] (25 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 100/1024 [MB] (24 MBps) Copying: 124/1024 [MB] (24 MBps) Copying: 149/1024 [MB] (25 MBps) Copying: 174/1024 [MB] (25 MBps) Copying: 200/1024 [MB] (25 MBps) Copying: 225/1024 [MB] (24 MBps) Copying: 249/1024 [MB] (24 MBps) Copying: 274/1024 [MB] (24 MBps) Copying: 298/1024 [MB] (24 MBps) Copying: 323/1024 [MB] (24 MBps) Copying: 348/1024 [MB] (24 MBps) Copying: 372/1024 [MB] (24 MBps) Copying: 397/1024 [MB] (24 MBps) Copying: 421/1024 [MB] (24 MBps) Copying: 447/1024 [MB] (25 MBps) Copying: 472/1024 [MB] (25 MBps) Copying: 498/1024 [MB] (25 MBps) Copying: 523/1024 [MB] (25 MBps) Copying: 547/1024 [MB] (23 MBps) Copying: 572/1024 [MB] (24 MBps) Copying: 597/1024 [MB] (25 MBps) Copying: 622/1024 [MB] (25 MBps) Copying: 648/1024 [MB] (25 MBps) Copying: 672/1024 [MB] (24 MBps) Copying: 695/1024 [MB] (23 MBps) Copying: 719/1024 [MB] (24 MBps) Copying: 745/1024 [MB] (25 MBps) Copying: 770/1024 [MB] (25 MBps) Copying: 796/1024 [MB] (25 MBps) Copying: 821/1024 [MB] (25 MBps) Copying: 847/1024 [MB] (25 MBps) Copying: 873/1024 [MB] (26 MBps) Copying: 900/1024 [MB] (27 MBps) Copying: 928/1024 [MB] (27 MBps) Copying: 956/1024 [MB] (27 MBps) Copying: 983/1024 [MB] (27 MBps) Copying: 1009/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-11 15:30:06.143732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.143828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:21:52.687 [2024-07-11 15:30:06.143855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:52.687 [2024-07-11 15:30:06.143870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.143910] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:52.687 [2024-07-11 15:30:06.148940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.149122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:52.687 [2024-07-11 15:30:06.149264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.000 ms 00:21:52.687 [2024-07-11 15:30:06.149325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.149953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.150168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:52.687 [2024-07-11 15:30:06.150306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:21:52.687 [2024-07-11 15:30:06.150519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.155423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.155608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:52.687 [2024-07-11 15:30:06.155745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:21:52.687 [2024-07-11 15:30:06.155816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.165108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.165335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:52.687 [2024-07-11 15:30:06.165505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.091 ms 00:21:52.687 [2024-07-11 15:30:06.165677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.205540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.205768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:52.687 [2024-07-11 15:30:06.205915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.677 ms 00:21:52.687 [2024-07-11 15:30:06.205975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.229337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.229570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:52.687 [2024-07-11 15:30:06.229720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.240 ms 00:21:52.687 [2024-07-11 15:30:06.229790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.687 [2024-07-11 15:30:06.230049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.230124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:52.687 [2024-07-11 15:30:06.230174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:21:52.687 [2024-07-11 15:30:06.230285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
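(The trace_step entries above pair each FTL management step name with its wall-clock duration, which makes slow phases easy to rank. A minimal sketch of how those pairs can be summarized from a saved copy of this console output; the ftl.log filename is an assumption for illustration, and the sketch assumes one log entry per line as in the original console stream:)

  # Rank FTL management steps by duration, slowest first.
  # Each "name:" line is followed by a matching "duration:" line.
  awk -F'name: |duration: ' '/name: / {n=$2} /duration: / {print $2 "\t" n}' ftl.log \
    | sort -rn | head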
00:21:52.687 [2024-07-11 15:30:06.269674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.687 [2024-07-11 15:30:06.269906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:52.687 [2024-07-11 15:30:06.270095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.329 ms 00:21:52.687 [2024-07-11 15:30:06.270159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.945 [2024-07-11 15:30:06.308712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.945 [2024-07-11 15:30:06.308948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:52.945 [2024-07-11 15:30:06.308981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.464 ms 00:21:52.945 [2024-07-11 15:30:06.309006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.945 [2024-07-11 15:30:06.346968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.945 [2024-07-11 15:30:06.347042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:52.945 [2024-07-11 15:30:06.347113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.878 ms 00:21:52.945 [2024-07-11 15:30:06.347129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.945 [2024-07-11 15:30:06.384719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.945 [2024-07-11 15:30:06.384771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:52.945 [2024-07-11 15:30:06.384801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.469 ms 00:21:52.945 [2024-07-11 15:30:06.384814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.945 [2024-07-11 15:30:06.384891] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:52.945 [2024-07-11 15:30:06.384924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.384942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.384957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.384972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.384987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385149] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:52.945 [2024-07-11 15:30:06.385163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 
15:30:06.385538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:21:52.946 [2024-07-11 15:30:06.385922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.385996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:52.946 [2024-07-11 15:30:06.386599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:52.946 [2024-07-11 15:30:06.386615] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae11445a-6c38-4afe-8ce2-49581ac79788 00:21:52.946 [2024-07-11 15:30:06.386631] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:52.946 [2024-07-11 15:30:06.386646] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:52.946 [2024-07-11 15:30:06.386667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:52.946 [2024-07-11 15:30:06.386681] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:52.946 [2024-07-11 15:30:06.386694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:52.946 [2024-07-11 15:30:06.386708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:52.946 [2024-07-11 15:30:06.386721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:52.947 [2024-07-11 15:30:06.386734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:52.947 [2024-07-11 15:30:06.386746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:52.947 [2024-07-11 15:30:06.386760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.947 [2024-07-11 15:30:06.386774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:52.947 [2024-07-11 15:30:06.386789] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:21:52.947 [2024-07-11 15:30:06.386803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.407068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.947 [2024-07-11 15:30:06.407120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:52.947 [2024-07-11 15:30:06.407167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.210 ms 00:21:52.947 [2024-07-11 15:30:06.407182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.407698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:52.947 [2024-07-11 15:30:06.407733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:52.947 [2024-07-11 15:30:06.407750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:21:52.947 [2024-07-11 15:30:06.407764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.452860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.947 [2024-07-11 15:30:06.452921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:52.947 [2024-07-11 15:30:06.452951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.947 [2024-07-11 15:30:06.452965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.453100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.947 [2024-07-11 15:30:06.453128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:52.947 [2024-07-11 15:30:06.453144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.947 [2024-07-11 15:30:06.453168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.453284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.947 [2024-07-11 15:30:06.453307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:52.947 [2024-07-11 15:30:06.453323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.947 [2024-07-11 15:30:06.453336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.453362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.947 [2024-07-11 15:30:06.453378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:52.947 [2024-07-11 15:30:06.453392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.947 [2024-07-11 15:30:06.453405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:52.947 [2024-07-11 15:30:06.552683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:52.947 [2024-07-11 15:30:06.552754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:52.947 [2024-07-11 15:30:06.552774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:52.947 [2024-07-11 15:30:06.552786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:21:53.212 [2024-07-11 15:30:06.637205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.212 [2024-07-11 15:30:06.637323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.212 [2024-07-11 15:30:06.637399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.212 [2024-07-11 15:30:06.637595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.212 [2024-07-11 15:30:06.637713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:53.212 [2024-07-11 15:30:06.637824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.637895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.212 [2024-07-11 15:30:06.637910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.212 [2024-07-11 15:30:06.637922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.212 [2024-07-11 15:30:06.637934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.212 [2024-07-11 15:30:06.638131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 494.335 ms, result 0 00:21:54.262 00:21:54.262 00:21:54.262 15:30:07 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:56.159 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:56.159 15:30:09 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:56.417 [2024-07-11 15:30:09.828740] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:56.417 [2024-07-11 15:30:09.828939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81967 ] 00:21:56.417 [2024-07-11 15:30:10.000633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.676 [2024-07-11 15:30:10.207863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.934 [2024-07-11 15:30:10.514362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:56.934 [2024-07-11 15:30:10.514449] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:57.194 [2024-07-11 15:30:10.674545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.674606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:57.194 [2024-07-11 15:30:10.674626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:57.194 [2024-07-11 15:30:10.674637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.674705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.674725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.194 [2024-07-11 15:30:10.674737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:57.194 [2024-07-11 15:30:10.674752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.674781] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:57.194 [2024-07-11 15:30:10.675744] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:57.194 [2024-07-11 15:30:10.675787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.675805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.194 [2024-07-11 15:30:10.675818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:21:57.194 [2024-07-11 15:30:10.675830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.677046] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:57.194 [2024-07-11 15:30:10.693475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.693521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:57.194 [2024-07-11 15:30:10.693540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.429 ms 00:21:57.194 [2024-07-11 15:30:10.693551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.693624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.693644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:57.194 [2024-07-11 15:30:10.693661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:57.194 
[2024-07-11 15:30:10.693672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.698095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.698139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.194 [2024-07-11 15:30:10.698156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:21:57.194 [2024-07-11 15:30:10.698168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.698265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.698287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.194 [2024-07-11 15:30:10.698299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:57.194 [2024-07-11 15:30:10.698310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.698399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.698430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:57.194 [2024-07-11 15:30:10.698441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:57.194 [2024-07-11 15:30:10.698463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.698495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:57.194 [2024-07-11 15:30:10.702817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.702856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.194 [2024-07-11 15:30:10.702872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.330 ms 00:21:57.194 [2024-07-11 15:30:10.702884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.702939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.702955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:57.194 [2024-07-11 15:30:10.702968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:57.194 [2024-07-11 15:30:10.702978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.703061] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:57.194 [2024-07-11 15:30:10.703093] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:57.194 [2024-07-11 15:30:10.703148] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:57.194 [2024-07-11 15:30:10.703183] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:57.194 [2024-07-11 15:30:10.703279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:57.194 [2024-07-11 15:30:10.703293] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:57.194 [2024-07-11 15:30:10.703305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 
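(For context, the "testfile: OK" md5sum result earlier in this run comes from a simple write / shutdown / reload / read-back pattern around the FTL bdev. A hedged sketch of that flow with spdk_dd, reusing the paths and the 131072-block --seek offset shown in this log; testfile.md5 is assumed to hold the checksum of the original testfile, and the read-back --count value is illustrative rather than taken from the script:)

  SPDK=/home/vagrant/spdk_repo/spdk
  # write the test data into the FTL bdev at a fixed block offset
  $SPDK/build/bin/spdk_dd --if=$SPDK/test/ftl/testfile --ob=ftl0 \
      --json=$SPDK/test/ftl/config/ftl.json --seek=131072
  # ...restart the target so FTL must restore its state, then read the data back
  $SPDK/build/bin/spdk_dd --ib=ftl0 --of=$SPDK/test/ftl/testfile \
      --json=$SPDK/test/ftl/config/ftl.json --skip=131072 --count=262144
  # verify the restored data matches the original checksum
  md5sum -c $SPDK/test/ftl/testfile.md5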
00:21:57.194 [2024-07-11 15:30:10.703319] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:57.194 [2024-07-11 15:30:10.703330] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:57.194 [2024-07-11 15:30:10.703341] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:57.194 [2024-07-11 15:30:10.703351] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:57.194 [2024-07-11 15:30:10.703361] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:57.194 [2024-07-11 15:30:10.703371] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:57.194 [2024-07-11 15:30:10.703382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.703396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:57.194 [2024-07-11 15:30:10.703406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:21:57.194 [2024-07-11 15:30:10.703416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.703514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.194 [2024-07-11 15:30:10.703527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:57.194 [2024-07-11 15:30:10.703537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:57.194 [2024-07-11 15:30:10.703547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.194 [2024-07-11 15:30:10.703669] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:57.194 [2024-07-11 15:30:10.703685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:57.195 [2024-07-11 15:30:10.703701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.195 [2024-07-11 15:30:10.703713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:57.195 [2024-07-11 15:30:10.703734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:57.195 [2024-07-11 15:30:10.703755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:57.195 [2024-07-11 15:30:10.703765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.195 [2024-07-11 15:30:10.703785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:57.195 [2024-07-11 15:30:10.703795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:57.195 [2024-07-11 15:30:10.703805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.195 [2024-07-11 15:30:10.703814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:57.195 [2024-07-11 15:30:10.703824] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:57.195 [2024-07-11 15:30:10.703836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:57.195 [2024-07-11 15:30:10.703856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:57.195 [2024-07-11 15:30:10.703865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:57.195 [2024-07-11 15:30:10.703915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.195 [2024-07-11 15:30:10.703951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:57.195 [2024-07-11 15:30:10.703960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:57.195 [2024-07-11 15:30:10.703969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.195 [2024-07-11 15:30:10.703978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:57.195 [2024-07-11 15:30:10.703988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:57.195 [2024-07-11 15:30:10.704014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.195 [2024-07-11 15:30:10.704041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:57.195 [2024-07-11 15:30:10.704051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:57.195 [2024-07-11 15:30:10.704060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.195 [2024-07-11 15:30:10.704070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:57.195 [2024-07-11 15:30:10.704081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:57.195 [2024-07-11 15:30:10.704091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.195 [2024-07-11 15:30:10.704101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:57.195 [2024-07-11 15:30:10.704112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:57.195 [2024-07-11 15:30:10.704122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.195 [2024-07-11 15:30:10.704132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:57.195 [2024-07-11 15:30:10.704142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:57.195 [2024-07-11 15:30:10.704451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.704512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:57.195 [2024-07-11 15:30:10.704552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:57.195 [2024-07-11 15:30:10.704590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.704740] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:57.195 [2024-07-11 15:30:10.704810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:57.195 [2024-07-11 15:30:10.704872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.195 [2024-07-11 15:30:10.704935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.195 [2024-07-11 15:30:10.704998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:57.195 [2024-07-11 15:30:10.705058] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:57.195 [2024-07-11 15:30:10.705099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:57.195 [2024-07-11 15:30:10.705137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:57.195 [2024-07-11 15:30:10.705173] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:57.195 [2024-07-11 15:30:10.705281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:57.195 [2024-07-11 15:30:10.705361] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:57.195 [2024-07-11 15:30:10.705490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.705565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:57.195 [2024-07-11 15:30:10.705643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:57.195 [2024-07-11 15:30:10.705702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:57.195 [2024-07-11 15:30:10.705843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:57.195 [2024-07-11 15:30:10.705905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:57.195 [2024-07-11 15:30:10.706049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:57.195 [2024-07-11 15:30:10.706072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:57.195 [2024-07-11 15:30:10.706085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:57.195 [2024-07-11 15:30:10.706097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:57.195 [2024-07-11 15:30:10.706107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.706119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.706130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.706144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.706156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:57.195 [2024-07-11 15:30:10.706167] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:57.195 [2024-07-11 15:30:10.706180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.706192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:57.195 [2024-07-11 15:30:10.706204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:57.195 [2024-07-11 15:30:10.706215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:57.195 [2024-07-11 15:30:10.706226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:57.195 [2024-07-11 15:30:10.706239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.706258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:57.195 [2024-07-11 15:30:10.706271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:21:57.195 [2024-07-11 15:30:10.706283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.195 [2024-07-11 15:30:10.749830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.750169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.195 [2024-07-11 15:30:10.750309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.450 ms 00:21:57.195 [2024-07-11 15:30:10.750365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.195 [2024-07-11 15:30:10.750513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.750581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:57.195 [2024-07-11 15:30:10.750637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:57.195 [2024-07-11 15:30:10.750678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.195 [2024-07-11 15:30:10.789364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.789656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.195 [2024-07-11 15:30:10.789776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.564 ms 00:21:57.195 [2024-07-11 15:30:10.789828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.195 [2024-07-11 15:30:10.789925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.790083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.195 [2024-07-11 15:30:10.790155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:57.195 [2024-07-11 15:30:10.790194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.195 [2024-07-11 15:30:10.790600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.790786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.195 [2024-07-11 15:30:10.790899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:21:57.195 [2024-07-11 15:30:10.790948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.195 [2024-07-11 15:30:10.791155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:57.195 [2024-07-11 15:30:10.791221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.195 [2024-07-11 15:30:10.791326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:21:57.195 [2024-07-11 15:30:10.791375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.807791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.807961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.455 [2024-07-11 15:30:10.808149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.280 ms 00:21:57.455 [2024-07-11 15:30:10.808204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.824889] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:57.455 [2024-07-11 15:30:10.825109] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:57.455 [2024-07-11 15:30:10.825275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.825320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:57.455 [2024-07-11 15:30:10.825427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.827 ms 00:21:57.455 [2024-07-11 15:30:10.825476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.855233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.855403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:57.455 [2024-07-11 15:30:10.855537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.682 ms 00:21:57.455 [2024-07-11 15:30:10.855598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.871562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.871725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:57.455 [2024-07-11 15:30:10.871841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.886 ms 00:21:57.455 [2024-07-11 15:30:10.871890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.887984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.888160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:57.455 [2024-07-11 15:30:10.888308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.018 ms 00:21:57.455 [2024-07-11 15:30:10.888331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.889211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.889240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:57.455 [2024-07-11 15:30:10.889286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:21:57.455 [2024-07-11 15:30:10.889298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.976402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 
15:30:10.976467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:57.455 [2024-07-11 15:30:10.976488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.078 ms 00:21:57.455 [2024-07-11 15:30:10.976500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.989509] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:57.455 [2024-07-11 15:30:10.992154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.992224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:57.455 [2024-07-11 15:30:10.992242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.572 ms 00:21:57.455 [2024-07-11 15:30:10.992254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.992370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.992390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:57.455 [2024-07-11 15:30:10.992404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:57.455 [2024-07-11 15:30:10.992415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.992526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.992557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:57.455 [2024-07-11 15:30:10.992571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:57.455 [2024-07-11 15:30:10.992582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.992618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.992633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:57.455 [2024-07-11 15:30:10.992644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:57.455 [2024-07-11 15:30:10.992654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:10.992693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:57.455 [2024-07-11 15:30:10.992710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:10.992722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:57.455 [2024-07-11 15:30:10.992737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:57.455 [2024-07-11 15:30:10.992748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:11.024113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:11.024164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:57.455 [2024-07-11 15:30:11.024183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.340 ms 00:21:57.455 [2024-07-11 15:30:11.024196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:11.024280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.455 [2024-07-11 15:30:11.024309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:57.455 [2024-07-11 
15:30:11.024322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:57.455 [2024-07-11 15:30:11.024333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.455 [2024-07-11 15:30:11.025435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.373 ms, result 0 00:22:42.630  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-11 15:30:55.990745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:55.990844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.630 [2024-07-11 15:30:55.990866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:42.630 [2024-07-11 15:30:55.990879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:55.992879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.630 [2024-07-11 15:30:55.998685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:55.998739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.630 [2024-07-11 15:30:55.998770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.728 ms 00:22:42.630 [2024-07-11 15:30:55.998780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.011256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.011296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.630 [2024-07-11 15:30:56.011327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.059 ms 00:22:42.630 [2024-07-11 15:30:56.011337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.031565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.031639] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.630 [2024-07-11 15:30:56.031695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.209 ms 00:22:42.630 [2024-07-11 15:30:56.031706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.037443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.037474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.630 [2024-07-11 15:30:56.037503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.701 ms 00:22:42.630 [2024-07-11 15:30:56.037512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.064339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.064376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.630 [2024-07-11 15:30:56.064407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.768 ms 00:22:42.630 [2024-07-11 15:30:56.064416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.080118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.080154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.630 [2024-07-11 15:30:56.080185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.664 ms 00:22:42.630 [2024-07-11 15:30:56.080201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.169320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.169387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.630 [2024-07-11 15:30:56.169424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.076 ms 00:22:42.630 [2024-07-11 15:30:56.169435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.201099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.201184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:42.630 [2024-07-11 15:30:56.201217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.643 ms 00:22:42.630 [2024-07-11 15:30:56.201228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.630 [2024-07-11 15:30:56.230775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.630 [2024-07-11 15:30:56.230813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:42.630 [2024-07-11 15:30:56.230843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.468 ms 00:22:42.630 [2024-07-11 15:30:56.230853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.902 [2024-07-11 15:30:56.263618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.902 [2024-07-11 15:30:56.263663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.902 [2024-07-11 15:30:56.263695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.724 ms 00:22:42.902 [2024-07-11 15:30:56.263706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.902 [2024-07-11 15:30:56.292009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
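Each management step in this shutdown sequence is traced by mngt/ftl_mngt.c:trace_step as an Action/name/duration/status quadruple, so per-step timings can be pulled straight out of the console output. A minimal sketch, assuming the raw log (one *NOTICE* entry per line) has been saved as ftl.log; both the file name and that layout are assumptions, not part of this run:

  # pair each trace_step "name:" entry with the "duration:" entry that follows it
  awk '/trace_step.*name:/     { sub(/.*name: /, "");     step = $0 }
       /trace_step.*duration:/ { sub(/.*duration: /, ""); printf "%8s ms  %s\n", $1, step }' ftl.log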
00:22:42.902 [2024-07-11 15:30:56.292055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.902 [2024-07-11 15:30:56.292070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.169 ms 00:22:42.902 [2024-07-11 15:30:56.292080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.902 [2024-07-11 15:30:56.292119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.902 [2024-07-11 15:30:56.292140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 92160 / 261120 wr_cnt: 1 state: open 00:22:42.902 [2024-07-11 15:30:56.292152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292588] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.902 [2024-07-11 15:30:56.292821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 
15:30:56.292831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.292995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:22:42.903 [2024-07-11 15:30:56.293118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.903 [2024-07-11 15:30:56.293182] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.903 [2024-07-11 15:30:56.293193] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae11445a-6c38-4afe-8ce2-49581ac79788 00:22:42.903 [2024-07-11 15:30:56.293203] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 92160 00:22:42.903 [2024-07-11 15:30:56.293212] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 93120 00:22:42.903 [2024-07-11 15:30:56.293222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 92160 00:22:42.903 [2024-07-11 15:30:56.293234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:22:42.903 [2024-07-11 15:30:56.293244] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.903 [2024-07-11 15:30:56.293259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:42.903 [2024-07-11 15:30:56.293268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.903 [2024-07-11 15:30:56.293277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.903 [2024-07-11 15:30:56.293285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:42.903 [2024-07-11 15:30:56.293295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.903 [2024-07-11 15:30:56.293308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.903 [2024-07-11 15:30:56.293319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.178 ms 00:22:42.903 [2024-07-11 15:30:56.293328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.310091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.903 [2024-07-11 15:30:56.310135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:42.903 [2024-07-11 15:30:56.310179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.725 ms 00:22:42.903 [2024-07-11 15:30:56.310192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.310687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.903 [2024-07-11 15:30:56.310721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:42.903 [2024-07-11 15:30:56.310736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:22:42.903 [2024-07-11 15:30:56.310747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.348479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.903 [2024-07-11 15:30:56.348547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.903 [2024-07-11 15:30:56.348581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
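The statistics dump above is internally consistent: write amplification is total writes divided by user writes, and the reported WAF of 1.0104 is exactly 93120 / 92160, with the extra 960 blocks presumably being the FTL metadata persists traced elsewhere in this log. A quick check:

  awk 'BEGIN { printf "WAF = %.4f\n", 93120 / 92160 }'   # prints WAF = 1.0104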
00:22:42.903 [2024-07-11 15:30:56.348592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.348668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.903 [2024-07-11 15:30:56.348684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.903 [2024-07-11 15:30:56.348696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.903 [2024-07-11 15:30:56.348707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.348784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.903 [2024-07-11 15:30:56.348803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.903 [2024-07-11 15:30:56.348815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.903 [2024-07-11 15:30:56.348827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.348854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.903 [2024-07-11 15:30:56.348869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.903 [2024-07-11 15:30:56.348880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.903 [2024-07-11 15:30:56.348891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.903 [2024-07-11 15:30:56.445956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.903 [2024-07-11 15:30:56.446068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.903 [2024-07-11 15:30:56.446091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.903 [2024-07-11 15:30:56.446103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.529291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.529376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:43.172 [2024-07-11 15:30:56.529412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.529424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.529503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.529519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:43.172 [2024-07-11 15:30:56.529531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.529543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.529586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.529609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:43.172 [2024-07-11 15:30:56.529622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.529632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.529749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.529768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:43.172 [2024-07-11 
15:30:56.529780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.529791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.529837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.529854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:43.172 [2024-07-11 15:30:56.529872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.529882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.529925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.529940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:43.172 [2024-07-11 15:30:56.529952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.529962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.530011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.172 [2024-07-11 15:30:56.530069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:43.172 [2024-07-11 15:30:56.530083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.172 [2024-07-11 15:30:56.530094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.172 [2024-07-11 15:30:56.530232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.314 ms, result 0 00:22:44.546 00:22:44.546 00:22:44.546 15:30:57 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:44.546 [2024-07-11 15:30:58.032807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
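This spdk_dd invocation re-reads a slice of the restored FTL bdev into testfile: --ib names the input bdev, and --skip/--count are counted in I/O units rather than bytes. Assuming the 4 KiB FTL block size implied by the layout dump later in this run, the numbers line up with the 1024 MB total the copy progress reports:

  # 262144 blocks * 4 KiB = 1024 MiB copied, starting 512 MiB into ftl0
  awk 'BEGIN { printf "count = %d MiB, skip = %d MiB\n",
               262144 * 4096 / 1048576, 131072 * 4096 / 1048576 }'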
00:22:44.547 [2024-07-11 15:30:58.033245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82439 ] 00:22:44.804 [2024-07-11 15:30:58.202646] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.804 [2024-07-11 15:30:58.379722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.062 [2024-07-11 15:30:58.669688] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.062 [2024-07-11 15:30:58.669781] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.322 [2024-07-11 15:30:58.827168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.827217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:45.322 [2024-07-11 15:30:58.827251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.322 [2024-07-11 15:30:58.827261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.827322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.827341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.322 [2024-07-11 15:30:58.827351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:45.322 [2024-07-11 15:30:58.827364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.827390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:45.322 [2024-07-11 15:30:58.828167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:45.322 [2024-07-11 15:30:58.828196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.828212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.322 [2024-07-11 15:30:58.828223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:22:45.322 [2024-07-11 15:30:58.828232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.829401] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:45.322 [2024-07-11 15:30:58.844519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.844559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:45.322 [2024-07-11 15:30:58.844591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.119 ms 00:22:45.322 [2024-07-11 15:30:58.844606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.844687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.844712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:45.322 [2024-07-11 15:30:58.844735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:45.322 [2024-07-11 15:30:58.844749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.849151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:45.322 [2024-07-11 15:30:58.849189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.322 [2024-07-11 15:30:58.849218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms 00:22:45.322 [2024-07-11 15:30:58.849228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.849310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.849329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.322 [2024-07-11 15:30:58.849340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:45.322 [2024-07-11 15:30:58.849349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.849404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.849435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:45.322 [2024-07-11 15:30:58.849460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:45.322 [2024-07-11 15:30:58.849470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.849498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:45.322 [2024-07-11 15:30:58.853300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.853331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.322 [2024-07-11 15:30:58.853361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:22:45.322 [2024-07-11 15:30:58.853371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.853430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.853445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:45.322 [2024-07-11 15:30:58.853455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:45.322 [2024-07-11 15:30:58.853465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.853500] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:45.322 [2024-07-11 15:30:58.853528] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:45.322 [2024-07-11 15:30:58.853565] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:45.322 [2024-07-11 15:30:58.853585] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:45.322 [2024-07-11 15:30:58.853674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:45.322 [2024-07-11 15:30:58.853687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:45.322 [2024-07-11 15:30:58.853700] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:45.322 [2024-07-11 15:30:58.853712] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:45.322 [2024-07-11 15:30:58.853723] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:45.322 [2024-07-11 15:30:58.853734] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:45.322 [2024-07-11 15:30:58.853744] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:45.322 [2024-07-11 15:30:58.853753] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:45.322 [2024-07-11 15:30:58.853762] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:45.322 [2024-07-11 15:30:58.853772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.853785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:45.322 [2024-07-11 15:30:58.853796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:22:45.322 [2024-07-11 15:30:58.853805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.853892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.322 [2024-07-11 15:30:58.853903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:45.322 [2024-07-11 15:30:58.853913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:45.322 [2024-07-11 15:30:58.853923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.322 [2024-07-11 15:30:58.854012] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:45.322 [2024-07-11 15:30:58.854088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:45.322 [2024-07-11 15:30:58.854106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:45.322 [2024-07-11 15:30:58.854138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:45.322 [2024-07-11 15:30:58.854169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.322 [2024-07-11 15:30:58.854188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:45.322 [2024-07-11 15:30:58.854198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:45.322 [2024-07-11 15:30:58.854208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.322 [2024-07-11 15:30:58.854217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:45.322 [2024-07-11 15:30:58.854227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:45.322 [2024-07-11 15:30:58.854238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:45.322 [2024-07-11 15:30:58.854257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854267] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:45.322 [2024-07-11 15:30:58.854298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:45.322 [2024-07-11 15:30:58.854327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:45.322 [2024-07-11 15:30:58.854370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:45.322 [2024-07-11 15:30:58.854397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854422] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.322 [2024-07-11 15:30:58.854453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:45.322 [2024-07-11 15:30:58.854478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.322 [2024-07-11 15:30:58.854511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:45.322 [2024-07-11 15:30:58.854520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:45.322 [2024-07-11 15:30:58.854544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.322 [2024-07-11 15:30:58.854553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:45.322 [2024-07-11 15:30:58.854563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:45.322 [2024-07-11 15:30:58.854572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.322 [2024-07-11 15:30:58.854582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:45.323 [2024-07-11 15:30:58.854592] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:45.323 [2024-07-11 15:30:58.854601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.323 [2024-07-11 15:30:58.854610] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:45.323 [2024-07-11 15:30:58.854620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:45.323 [2024-07-11 15:30:58.854630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.323 [2024-07-11 15:30:58.854640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.323 [2024-07-11 15:30:58.854652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:45.323 [2024-07-11 15:30:58.854662] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:45.323 [2024-07-11 15:30:58.854671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:45.323 
[2024-07-11 15:30:58.854681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:45.323 [2024-07-11 15:30:58.854690] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:45.323 [2024-07-11 15:30:58.854700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:45.323 [2024-07-11 15:30:58.854712] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:45.323 [2024-07-11 15:30:58.854725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:45.323 [2024-07-11 15:30:58.854747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:45.323 [2024-07-11 15:30:58.854757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:45.323 [2024-07-11 15:30:58.854768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:45.323 [2024-07-11 15:30:58.854779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:45.323 [2024-07-11 15:30:58.854789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:45.323 [2024-07-11 15:30:58.854799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:45.323 [2024-07-11 15:30:58.854809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:45.323 [2024-07-11 15:30:58.854819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:45.323 [2024-07-11 15:30:58.854830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:45.323 [2024-07-11 15:30:58.854881] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:45.323 [2024-07-11 15:30:58.854892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:45.323 [2024-07-11 15:30:58.854915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:45.323 [2024-07-11 15:30:58.854925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:45.323 [2024-07-11 15:30:58.854936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:45.323 [2024-07-11 15:30:58.854948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.854963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:45.323 [2024-07-11 15:30:58.854975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:22:45.323 [2024-07-11 15:30:58.854985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.323 [2024-07-11 15:30:58.891869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.891929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.323 [2024-07-11 15:30:58.891963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.805 ms 00:22:45.323 [2024-07-11 15:30:58.891974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.323 [2024-07-11 15:30:58.892111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.892129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.323 [2024-07-11 15:30:58.892140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:45.323 [2024-07-11 15:30:58.892150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.323 [2024-07-11 15:30:58.924735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.924783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.323 [2024-07-11 15:30:58.924815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.485 ms 00:22:45.323 [2024-07-11 15:30:58.924825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.323 [2024-07-11 15:30:58.924877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.924891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.323 [2024-07-11 15:30:58.924902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.323 [2024-07-11 15:30:58.924911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.323 [2024-07-11 15:30:58.925323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.925341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.323 [2024-07-11 15:30:58.925353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:22:45.323 [2024-07-11 15:30:58.925362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.323 [2024-07-11 15:30:58.925574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.323 [2024-07-11 15:30:58.925608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.323 [2024-07-11 15:30:58.925622] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:22:45.323 [2024-07-11 15:30:58.925633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.581 [2024-07-11 15:30:58.940763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.581 [2024-07-11 15:30:58.940802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.581 [2024-07-11 15:30:58.940848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.104 ms 00:22:45.581 [2024-07-11 15:30:58.940858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.581 [2024-07-11 15:30:58.955075] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:45.581 [2024-07-11 15:30:58.955124] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:45.581 [2024-07-11 15:30:58.955157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.581 [2024-07-11 15:30:58.955167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:45.581 [2024-07-11 15:30:58.955178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.171 ms 00:22:45.581 [2024-07-11 15:30:58.955187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.581 [2024-07-11 15:30:58.980435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:58.980488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:45.582 [2024-07-11 15:30:58.980519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.207 ms 00:22:45.582 [2024-07-11 15:30:58.980535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:58.993988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:58.994075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:45.582 [2024-07-11 15:30:58.994109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:22:45.582 [2024-07-11 15:30:58.994120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.007481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.007517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:45.582 [2024-07-11 15:30:59.007547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.311 ms 00:22:45.582 [2024-07-11 15:30:59.007556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.008289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.008326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.582 [2024-07-11 15:30:59.008341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:22:45.582 [2024-07-11 15:30:59.008352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.072168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.072234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.582 [2024-07-11 15:30:59.072269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.792 ms 00:22:45.582 [2024-07-11 15:30:59.072280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.084526] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.582 [2024-07-11 15:30:59.086968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.087001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.582 [2024-07-11 15:30:59.087047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.598 ms 00:22:45.582 [2024-07-11 15:30:59.087106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.087198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.087218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.582 [2024-07-11 15:30:59.087230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:45.582 [2024-07-11 15:30:59.087241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.088669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.088704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.582 [2024-07-11 15:30:59.088732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:22:45.582 [2024-07-11 15:30:59.088742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.088774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.088788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.582 [2024-07-11 15:30:59.088799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:45.582 [2024-07-11 15:30:59.088808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.088842] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.582 [2024-07-11 15:30:59.088855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.088865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.582 [2024-07-11 15:30:59.088878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:45.582 [2024-07-11 15:30:59.088887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.116313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.116351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.582 [2024-07-11 15:30:59.116384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.404 ms 00:22:45.582 [2024-07-11 15:30:59.116394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.582 [2024-07-11 15:30:59.116462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.582 [2024-07-11 15:30:59.116487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.582 [2024-07-11 15:30:59.116498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:45.582 [2024-07-11 15:30:59.116508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
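The sizes in the layout dump above cross-check: the device advertises 20971520 L2P entries at 4 bytes each, which is exactly the 80.00 MiB reported for the l2p region (blk_offs:0x20 blk_sz:0x5000, i.e. 20480 blocks of 4 KiB):

  awk 'BEGIN { printf "L2P table = %.2f MiB\n", 20971520 * 4 / 1048576 }'   # 80.00 MiB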
00:22:45.582 [2024-07-11 15:30:59.121811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 292.491 ms, result 0 00:23:28.718  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 15:31:42.094781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.095114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:28.718 [2024-07-11 15:31:42.095144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:28.718 [2024-07-11 15:31:42.095161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.095205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:28.718 [2024-07-11 15:31:42.099717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.099753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:28.718 [2024-07-11 15:31:42.099769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.483 ms 00:23:28.718 [2024-07-11 15:31:42.099780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.100006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.100051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:28.718 [2024-07-11 15:31:42.100065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:23:28.718 [2024-07-11 15:31:42.100083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.106077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.106123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:28.718 [2024-07-11 15:31:42.106149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.973 ms 00:23:28.718 [2024-07-11 15:31:42.106161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
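The copy summary above is internally consistent: moving 1024 MB at the reported average of 23 MBps predicts about 44.5 s of transfer, which agrees with the roughly 43 s gap between the 'FTL startup' finish stamp (15:30:59) and the first shutdown step (15:31:42). As a quick, purely illustrative check:

  echo 'scale=1; 1024 / 23' | bc    # 44.5 -> expected seconds of I/O at the average rate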
00:23:28.718 [2024-07-11 15:31:42.113000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.113061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:28.718 [2024-07-11 15:31:42.113094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms 00:23:28.718 [2024-07-11 15:31:42.113106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.143247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.143318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:28.718 [2024-07-11 15:31:42.143337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.075 ms 00:23:28.718 [2024-07-11 15:31:42.143348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.159941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.159983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:28.718 [2024-07-11 15:31:42.160015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.549 ms 00:23:28.718 [2024-07-11 15:31:42.160047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.278944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.279072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:28.718 [2024-07-11 15:31:42.279096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.828 ms 00:23:28.718 [2024-07-11 15:31:42.279107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.718 [2024-07-11 15:31:42.308612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.718 [2024-07-11 15:31:42.308655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:28.718 [2024-07-11 15:31:42.308688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.472 ms 00:23:28.718 [2024-07-11 15:31:42.308698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.978 [2024-07-11 15:31:42.337816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.978 [2024-07-11 15:31:42.337856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:28.978 [2024-07-11 15:31:42.337888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.076 ms 00:23:28.978 [2024-07-11 15:31:42.337898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.978 [2024-07-11 15:31:42.365757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.978 [2024-07-11 15:31:42.365795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:28.978 [2024-07-11 15:31:42.365826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.819 ms 00:23:28.978 [2024-07-11 15:31:42.365850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.978 [2024-07-11 15:31:42.393688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.978 [2024-07-11 15:31:42.393729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:28.978 [2024-07-11 15:31:42.393760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.757 ms 00:23:28.978 [2024-07-11 
15:31:42.393770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.978 [2024-07-11 15:31:42.393814] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:28.978 [2024-07-11 15:31:42.393835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:23:28.978 [2024-07-11 15:31:42.393848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:28.978 [2024-07-11 15:31:42.393920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.393993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394165] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 
15:31:42.394513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:28.979 [2024-07-11 15:31:42.394812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.394994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:28.979 [2024-07-11 15:31:42.395137] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:28.979 [2024-07-11 15:31:42.395151] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ae11445a-6c38-4afe-8ce2-49581ac79788 00:23:28.979 [2024-07-11 15:31:42.395164] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:23:28.979 [2024-07-11 15:31:42.395175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 42688 00:23:28.979 [2024-07-11 15:31:42.395185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 41728 00:23:28.979 [2024-07-11 15:31:42.395197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0230 00:23:28.980 [2024-07-11 15:31:42.395208] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:28.980 [2024-07-11 15:31:42.395226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:28.980 [2024-07-11 15:31:42.395236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:28.980 [2024-07-11 15:31:42.395246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:28.980 [2024-07-11 15:31:42.395256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:28.980 [2024-07-11 15:31:42.395267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.980 [2024-07-11 15:31:42.395278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:28.980 [2024-07-11 15:31:42.395293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:23:28.980 [2024-07-11 15:31:42.395304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.412305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.980 [2024-07-11 15:31:42.412354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:28.980 [2024-07-11 15:31:42.412386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.961 ms 00:23:28.980 [2024-07-11 15:31:42.412428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.412879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.980 [2024-07-11 15:31:42.412899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:28.980 [2024-07-11 15:31:42.412912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:23:28.980 [2024-07-11 15:31:42.412921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.449484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.980 [2024-07-11 15:31:42.449539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:28.980 [2024-07-11 15:31:42.449573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.980 [2024-07-11 15:31:42.449584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.449669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.980 [2024-07-11 15:31:42.449684] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:28.980 [2024-07-11 15:31:42.449696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.980 [2024-07-11 15:31:42.449707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.449823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.980 [2024-07-11 15:31:42.449842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:28.980 [2024-07-11 15:31:42.449854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.980 [2024-07-11 15:31:42.449864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.449889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.980 [2024-07-11 15:31:42.449903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:28.980 [2024-07-11 15:31:42.449913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.980 [2024-07-11 15:31:42.449923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.980 [2024-07-11 15:31:42.539394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:28.980 [2024-07-11 15:31:42.539467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:28.980 [2024-07-11 15:31:42.539500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:28.980 [2024-07-11 15:31:42.539512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.239 [2024-07-11 15:31:42.616399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.616410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.239 [2024-07-11 15:31:42.616506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.616517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.239 [2024-07-11 15:31:42.616588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.616599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.239 [2024-07-11 15:31:42.616740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.616751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.239 [2024-07-11 15:31:42.616833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.616844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.239 [2024-07-11 15:31:42.616913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.616923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.616972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.239 [2024-07-11 15:31:42.616987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.239 [2024-07-11 15:31:42.617002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.239 [2024-07-11 15:31:42.617013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.239 [2024-07-11 15:31:42.617283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.418 ms, result 0 00:23:30.173 00:23:30.173 00:23:30.173 15:31:43 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:32.085 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:32.085 15:31:45 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:32.085 15:31:45 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:32.085 15:31:45 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:32.344 Process with pid 80835 is not found 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80835 00:23:32.344 15:31:45 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 80835 ']' 00:23:32.344 15:31:45 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 80835 00:23:32.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80835) - No such process 00:23:32.344 15:31:45 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 80835 is not found' 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:32.344 Remove shared memory files 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:32.344 15:31:45 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:32.344 ************************************ 00:23:32.344 END TEST ftl_restore 00:23:32.344 ************************************ 00:23:32.344 00:23:32.344 real 3m25.453s 00:23:32.344 user 3m11.558s 
00:23:32.344 sys 0m15.640s 00:23:32.344 15:31:45 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.344 15:31:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:32.344 15:31:45 ftl -- common/autotest_common.sh@1142 -- # return 0 00:23:32.344 15:31:45 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:32.344 15:31:45 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:32.344 15:31:45 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.344 15:31:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:32.344 ************************************ 00:23:32.344 START TEST ftl_dirty_shutdown 00:23:32.344 ************************************ 00:23:32.344 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:32.344 * Looking for test storage... 00:23:32.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.344 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:32.344 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:32.344 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.344 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:32.603 15:31:45 
ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82970 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82970 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 82970 ']' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.603 15:31:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:32.603 [2024-07-11 15:31:46.071239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
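waitforlisten above gates the rest of the test on pid 82970 actually serving RPC on /var/tmp/spdk.sock before any bdev commands are issued. The real helper lives in test/common/autotest_common.sh; a minimal sketch of the same probe loop (the retry count and sleep interval here are hypothetical):

  # Poll until the freshly launched target answers an RPC, bailing out early
  # if the process has already died (kill -0 only tests process existence).
  for ((i = 0; i < 100; i++)); do
      kill -0 82970 2> /dev/null || break
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done

The same kill -0 liveness test shows up later in killprocess, where "kill: (80835) - No such process" is the expected outcome once the target has already exited.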
00:23:32.603 [2024-07-11 15:31:46.071395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82970 ] 00:23:32.861 [2024-07-11 15:31:46.232822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.861 [2024-07-11 15:31:46.460987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:33.796 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:34.362 { 00:23:34.362 "name": "nvme0n1", 00:23:34.362 "aliases": [ 00:23:34.362 "7de23b56-7d66-4619-b00e-502d2bb7d097" 00:23:34.362 ], 00:23:34.362 "product_name": "NVMe disk", 00:23:34.362 "block_size": 4096, 00:23:34.362 "num_blocks": 1310720, 00:23:34.362 "uuid": "7de23b56-7d66-4619-b00e-502d2bb7d097", 00:23:34.362 "assigned_rate_limits": { 00:23:34.362 "rw_ios_per_sec": 0, 00:23:34.362 "rw_mbytes_per_sec": 0, 00:23:34.362 "r_mbytes_per_sec": 0, 00:23:34.362 "w_mbytes_per_sec": 0 00:23:34.362 }, 00:23:34.362 "claimed": true, 00:23:34.362 "claim_type": "read_many_write_one", 00:23:34.362 "zoned": false, 00:23:34.362 "supported_io_types": { 00:23:34.362 "read": true, 00:23:34.362 "write": true, 00:23:34.362 "unmap": true, 00:23:34.362 "flush": true, 00:23:34.362 "reset": true, 00:23:34.362 "nvme_admin": true, 00:23:34.362 "nvme_io": true, 00:23:34.362 "nvme_io_md": false, 00:23:34.362 "write_zeroes": true, 00:23:34.362 "zcopy": false, 00:23:34.362 "get_zone_info": false, 00:23:34.362 "zone_management": false, 00:23:34.362 "zone_append": false, 00:23:34.362 "compare": true, 00:23:34.362 "compare_and_write": false, 00:23:34.362 "abort": true, 00:23:34.362 "seek_hole": false, 00:23:34.362 "seek_data": false, 00:23:34.362 "copy": true, 00:23:34.362 
"nvme_iov_md": false 00:23:34.362 }, 00:23:34.362 "driver_specific": { 00:23:34.362 "nvme": [ 00:23:34.362 { 00:23:34.362 "pci_address": "0000:00:11.0", 00:23:34.362 "trid": { 00:23:34.362 "trtype": "PCIe", 00:23:34.362 "traddr": "0000:00:11.0" 00:23:34.362 }, 00:23:34.362 "ctrlr_data": { 00:23:34.362 "cntlid": 0, 00:23:34.362 "vendor_id": "0x1b36", 00:23:34.362 "model_number": "QEMU NVMe Ctrl", 00:23:34.362 "serial_number": "12341", 00:23:34.362 "firmware_revision": "8.0.0", 00:23:34.362 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:34.362 "oacs": { 00:23:34.362 "security": 0, 00:23:34.362 "format": 1, 00:23:34.362 "firmware": 0, 00:23:34.362 "ns_manage": 1 00:23:34.362 }, 00:23:34.362 "multi_ctrlr": false, 00:23:34.362 "ana_reporting": false 00:23:34.362 }, 00:23:34.362 "vs": { 00:23:34.362 "nvme_version": "1.4" 00:23:34.362 }, 00:23:34.362 "ns_data": { 00:23:34.362 "id": 1, 00:23:34.362 "can_share": false 00:23:34.362 } 00:23:34.362 } 00:23:34.362 ], 00:23:34.362 "mp_policy": "active_passive" 00:23:34.362 } 00:23:34.362 } 00:23:34.362 ]' 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:34.362 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:34.363 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:34.363 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:34.363 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:34.363 15:31:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:34.620 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=da5579c1-3d53-4ce5-9d75-a73c22500a11 00:23:34.620 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:34.620 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da5579c1-3d53-4ce5-9d75-a73c22500a11 00:23:34.878 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:35.137 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=a57df313-a1fd-457c-9ca1-8a108a6a3ac4 00:23:35.137 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a57df313-a1fd-457c-9ca1-8a108a6a3ac4 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:35.396 
15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:35.396 15:31:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:35.655 { 00:23:35.655 "name": "c1e5804d-3c33-4eb2-a321-0d07f6fda0d6", 00:23:35.655 "aliases": [ 00:23:35.655 "lvs/nvme0n1p0" 00:23:35.655 ], 00:23:35.655 "product_name": "Logical Volume", 00:23:35.655 "block_size": 4096, 00:23:35.655 "num_blocks": 26476544, 00:23:35.655 "uuid": "c1e5804d-3c33-4eb2-a321-0d07f6fda0d6", 00:23:35.655 "assigned_rate_limits": { 00:23:35.655 "rw_ios_per_sec": 0, 00:23:35.655 "rw_mbytes_per_sec": 0, 00:23:35.655 "r_mbytes_per_sec": 0, 00:23:35.655 "w_mbytes_per_sec": 0 00:23:35.655 }, 00:23:35.655 "claimed": false, 00:23:35.655 "zoned": false, 00:23:35.655 "supported_io_types": { 00:23:35.655 "read": true, 00:23:35.655 "write": true, 00:23:35.655 "unmap": true, 00:23:35.655 "flush": false, 00:23:35.655 "reset": true, 00:23:35.655 "nvme_admin": false, 00:23:35.655 "nvme_io": false, 00:23:35.655 "nvme_io_md": false, 00:23:35.655 "write_zeroes": true, 00:23:35.655 "zcopy": false, 00:23:35.655 "get_zone_info": false, 00:23:35.655 "zone_management": false, 00:23:35.655 "zone_append": false, 00:23:35.655 "compare": false, 00:23:35.655 "compare_and_write": false, 00:23:35.655 "abort": false, 00:23:35.655 "seek_hole": true, 00:23:35.655 "seek_data": true, 00:23:35.655 "copy": false, 00:23:35.655 "nvme_iov_md": false 00:23:35.655 }, 00:23:35.655 "driver_specific": { 00:23:35.655 "lvol": { 00:23:35.655 "lvol_store_uuid": "a57df313-a1fd-457c-9ca1-8a108a6a3ac4", 00:23:35.655 "base_bdev": "nvme0n1", 00:23:35.655 "thin_provision": true, 00:23:35.655 "num_allocated_clusters": 0, 00:23:35.655 "snapshot": false, 00:23:35.655 "clone": false, 00:23:35.655 "esnap_clone": false 00:23:35.655 } 00:23:35.655 } 00:23:35.655 } 00:23:35.655 ]' 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:35.655 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:35.914 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:36.172 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:36.172 { 00:23:36.172 "name": "c1e5804d-3c33-4eb2-a321-0d07f6fda0d6", 00:23:36.172 "aliases": [ 00:23:36.172 "lvs/nvme0n1p0" 00:23:36.172 ], 00:23:36.172 "product_name": "Logical Volume", 00:23:36.172 "block_size": 4096, 00:23:36.172 "num_blocks": 26476544, 00:23:36.172 "uuid": "c1e5804d-3c33-4eb2-a321-0d07f6fda0d6", 00:23:36.172 "assigned_rate_limits": { 00:23:36.172 "rw_ios_per_sec": 0, 00:23:36.172 "rw_mbytes_per_sec": 0, 00:23:36.172 "r_mbytes_per_sec": 0, 00:23:36.172 "w_mbytes_per_sec": 0 00:23:36.172 }, 00:23:36.172 "claimed": false, 00:23:36.172 "zoned": false, 00:23:36.172 "supported_io_types": { 00:23:36.172 "read": true, 00:23:36.172 "write": true, 00:23:36.172 "unmap": true, 00:23:36.172 "flush": false, 00:23:36.172 "reset": true, 00:23:36.172 "nvme_admin": false, 00:23:36.172 "nvme_io": false, 00:23:36.172 "nvme_io_md": false, 00:23:36.172 "write_zeroes": true, 00:23:36.172 "zcopy": false, 00:23:36.172 "get_zone_info": false, 00:23:36.172 "zone_management": false, 00:23:36.172 "zone_append": false, 00:23:36.172 "compare": false, 00:23:36.172 "compare_and_write": false, 00:23:36.172 "abort": false, 00:23:36.172 "seek_hole": true, 00:23:36.172 "seek_data": true, 00:23:36.172 "copy": false, 00:23:36.172 "nvme_iov_md": false 00:23:36.172 }, 00:23:36.172 "driver_specific": { 00:23:36.172 "lvol": { 00:23:36.172 "lvol_store_uuid": "a57df313-a1fd-457c-9ca1-8a108a6a3ac4", 00:23:36.172 "base_bdev": "nvme0n1", 00:23:36.172 "thin_provision": true, 00:23:36.172 "num_allocated_clusters": 0, 00:23:36.172 "snapshot": false, 00:23:36.172 "clone": false, 00:23:36.172 "esnap_clone": false 00:23:36.172 } 00:23:36.172 } 00:23:36.172 } 00:23:36.172 ]' 00:23:36.172 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:36.172 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:36.172 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:36.430 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:36.430 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:36.430 15:31:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:36.430 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:36.430 15:31:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:36.688 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:36.946 { 00:23:36.946 "name": "c1e5804d-3c33-4eb2-a321-0d07f6fda0d6", 00:23:36.946 "aliases": [ 00:23:36.946 "lvs/nvme0n1p0" 00:23:36.946 ], 00:23:36.946 "product_name": "Logical Volume", 00:23:36.946 "block_size": 4096, 00:23:36.946 "num_blocks": 26476544, 00:23:36.946 "uuid": "c1e5804d-3c33-4eb2-a321-0d07f6fda0d6", 00:23:36.946 "assigned_rate_limits": { 00:23:36.946 "rw_ios_per_sec": 0, 00:23:36.946 "rw_mbytes_per_sec": 0, 00:23:36.946 "r_mbytes_per_sec": 0, 00:23:36.946 "w_mbytes_per_sec": 0 00:23:36.946 }, 00:23:36.946 "claimed": false, 00:23:36.946 "zoned": false, 00:23:36.946 "supported_io_types": { 00:23:36.946 "read": true, 00:23:36.946 "write": true, 00:23:36.946 "unmap": true, 00:23:36.946 "flush": false, 00:23:36.946 "reset": true, 00:23:36.946 "nvme_admin": false, 00:23:36.946 "nvme_io": false, 00:23:36.946 "nvme_io_md": false, 00:23:36.946 "write_zeroes": true, 00:23:36.946 "zcopy": false, 00:23:36.946 "get_zone_info": false, 00:23:36.946 "zone_management": false, 00:23:36.946 "zone_append": false, 00:23:36.946 "compare": false, 00:23:36.946 "compare_and_write": false, 00:23:36.946 "abort": false, 00:23:36.946 "seek_hole": true, 00:23:36.946 "seek_data": true, 00:23:36.946 "copy": false, 00:23:36.946 "nvme_iov_md": false 00:23:36.946 }, 00:23:36.946 "driver_specific": { 00:23:36.946 "lvol": { 00:23:36.946 "lvol_store_uuid": "a57df313-a1fd-457c-9ca1-8a108a6a3ac4", 00:23:36.946 "base_bdev": "nvme0n1", 00:23:36.946 "thin_provision": true, 00:23:36.946 "num_allocated_clusters": 0, 00:23:36.946 "snapshot": false, 00:23:36.946 "clone": false, 00:23:36.946 "esnap_clone": false 00:23:36.946 } 00:23:36.946 } 00:23:36.946 } 00:23:36.946 ]' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 
--l2p_dram_limit 10' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:36.946 15:31:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c1e5804d-3c33-4eb2-a321-0d07f6fda0d6 --l2p_dram_limit 10 -c nvc0n1p0 00:23:37.205 [2024-07-11 15:31:50.628213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.628287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:37.205 [2024-07-11 15:31:50.628308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:37.205 [2024-07-11 15:31:50.628321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.628414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.628451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:37.205 [2024-07-11 15:31:50.628464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:37.205 [2024-07-11 15:31:50.628477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.628507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:37.205 [2024-07-11 15:31:50.629603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:37.205 [2024-07-11 15:31:50.629638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.629659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:37.205 [2024-07-11 15:31:50.629672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:23:37.205 [2024-07-11 15:31:50.629686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.629835] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID be74e12b-85f1-4690-bff6-741dff03bc7b 00:23:37.205 [2024-07-11 15:31:50.630994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.631092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:37.205 [2024-07-11 15:31:50.631112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:37.205 [2024-07-11 15:31:50.631124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.636178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.636225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:37.205 [2024-07-11 15:31:50.636265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:23:37.205 [2024-07-11 15:31:50.636276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.636418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.636438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:37.205 [2024-07-11 15:31:50.636454] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:37.205 [2024-07-11 15:31:50.636466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.636561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.636581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:37.205 [2024-07-11 15:31:50.636596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:37.205 [2024-07-11 15:31:50.636610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.636645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:37.205 [2024-07-11 15:31:50.641247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.641288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:37.205 [2024-07-11 15:31:50.641320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.614 ms 00:23:37.205 [2024-07-11 15:31:50.641334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.641377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.205 [2024-07-11 15:31:50.641413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:37.205 [2024-07-11 15:31:50.641426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:37.205 [2024-07-11 15:31:50.641439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.205 [2024-07-11 15:31:50.641482] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:37.205 [2024-07-11 15:31:50.641646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:37.205 [2024-07-11 15:31:50.641666] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:37.205 [2024-07-11 15:31:50.641687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:37.205 [2024-07-11 15:31:50.641703] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:37.205 [2024-07-11 15:31:50.641719] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:37.205 [2024-07-11 15:31:50.641732] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:37.206 [2024-07-11 15:31:50.641760] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:37.206 [2024-07-11 15:31:50.641787] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:37.206 [2024-07-11 15:31:50.641801] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:37.206 [2024-07-11 15:31:50.641812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.206 [2024-07-11 15:31:50.641824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:37.206 [2024-07-11 15:31:50.641835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:37.206 [2024-07-11 15:31:50.641847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.206 [2024-07-11 15:31:50.641929] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.206 [2024-07-11 15:31:50.641945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:37.206 [2024-07-11 15:31:50.641957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:37.206 [2024-07-11 15:31:50.641969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.206 [2024-07-11 15:31:50.642117] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:37.206 [2024-07-11 15:31:50.642145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:37.206 [2024-07-11 15:31:50.642170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:37.206 [2024-07-11 15:31:50.642213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:37.206 [2024-07-11 15:31:50.642249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:37.206 [2024-07-11 15:31:50.642273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:37.206 [2024-07-11 15:31:50.642286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:37.206 [2024-07-11 15:31:50.642297] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:37.206 [2024-07-11 15:31:50.642312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:37.206 [2024-07-11 15:31:50.642323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:37.206 [2024-07-11 15:31:50.642336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:37.206 [2024-07-11 15:31:50.642362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642375] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:37.206 [2024-07-11 15:31:50.642400] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:37.206 [2024-07-11 15:31:50.642436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:37.206 [2024-07-11 15:31:50.642470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642493] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:37.206 [2024-07-11 15:31:50.642506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:37.206 [2024-07-11 15:31:50.642540] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642554] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:37.206 [2024-07-11 15:31:50.642565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:37.206 [2024-07-11 15:31:50.642578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:37.206 [2024-07-11 15:31:50.642588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:37.206 [2024-07-11 15:31:50.642601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:37.206 [2024-07-11 15:31:50.642612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:37.206 [2024-07-11 15:31:50.642626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:37.206 [2024-07-11 15:31:50.642650] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:37.206 [2024-07-11 15:31:50.642661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642673] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:37.206 [2024-07-11 15:31:50.642685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:37.206 [2024-07-11 15:31:50.642698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.206 [2024-07-11 15:31:50.642738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:37.206 [2024-07-11 15:31:50.642748] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:37.206 [2024-07-11 15:31:50.642762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:37.206 [2024-07-11 15:31:50.642773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:37.206 [2024-07-11 15:31:50.642785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:37.206 [2024-07-11 15:31:50.642795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:37.206 [2024-07-11 15:31:50.642812] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:37.206 [2024-07-11 15:31:50.642825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.642843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:37.206 [2024-07-11 15:31:50.642855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:37.206 [2024-07-11 15:31:50.642867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:37.206 [2024-07-11 15:31:50.642879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:37.206 [2024-07-11 15:31:50.642891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:37.206 [2024-07-11 15:31:50.642902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:37.206 [2024-07-11 15:31:50.642915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:37.206 [2024-07-11 15:31:50.642926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:37.206 [2024-07-11 15:31:50.642940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:37.206 [2024-07-11 15:31:50.642951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.642966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.642977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.642990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.643001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:37.206 [2024-07-11 15:31:50.643014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:37.206 [2024-07-11 15:31:50.643041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.643070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:37.206 [2024-07-11 15:31:50.643084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:37.206 [2024-07-11 15:31:50.643097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:37.206 [2024-07-11 15:31:50.643109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:37.206 [2024-07-11 15:31:50.643123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.206 [2024-07-11 15:31:50.643136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:37.206 [2024-07-11 15:31:50.643149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:23:37.206 [2024-07-11 15:31:50.643161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.206 [2024-07-11 15:31:50.643215] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:37.206 [2024-07-11 15:31:50.643232] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:40.516 [2024-07-11 15:31:53.998221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:53.998288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:40.516 [2024-07-11 15:31:53.998313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3355.018 ms 00:23:40.516 [2024-07-11 15:31:53.998325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.026259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.026317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.516 [2024-07-11 15:31:54.026341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.684 ms 00:23:40.516 [2024-07-11 15:31:54.026354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.026554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.026572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.516 [2024-07-11 15:31:54.026585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:40.516 [2024-07-11 15:31:54.026600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.059463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.059510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.516 [2024-07-11 15:31:54.059546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.813 ms 00:23:40.516 [2024-07-11 15:31:54.059556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.059602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.059623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.516 [2024-07-11 15:31:54.059636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.516 [2024-07-11 15:31:54.059646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.059993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.060011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.516 [2024-07-11 15:31:54.060061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:40.516 [2024-07-11 15:31:54.060074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.060222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.060239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.516 [2024-07-11 15:31:54.060256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:23:40.516 [2024-07-11 15:31:54.060266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.076798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.076847] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.516 [2024-07-11 15:31:54.076884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.504 ms 00:23:40.516 [2024-07-11 15:31:54.076896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.516 [2024-07-11 15:31:54.090839] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.516 [2024-07-11 15:31:54.093663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.516 [2024-07-11 15:31:54.093700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.516 [2024-07-11 15:31:54.093749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.670 ms 00:23:40.516 [2024-07-11 15:31:54.093777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.774 [2024-07-11 15:31:54.218707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.774 [2024-07-11 15:31:54.218793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:40.774 [2024-07-11 15:31:54.218815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.877 ms 00:23:40.774 [2024-07-11 15:31:54.218828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.774 [2024-07-11 15:31:54.219009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.774 [2024-07-11 15:31:54.219031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.774 [2024-07-11 15:31:54.219081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:23:40.774 [2024-07-11 15:31:54.219097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.774 [2024-07-11 15:31:54.247278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.774 [2024-07-11 15:31:54.247336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:40.774 [2024-07-11 15:31:54.247354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.106 ms 00:23:40.774 [2024-07-11 15:31:54.247367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.774 [2024-07-11 15:31:54.274824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.774 [2024-07-11 15:31:54.274881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:40.774 [2024-07-11 15:31:54.274900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.426 ms 00:23:40.774 [2024-07-11 15:31:54.274912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.774 [2024-07-11 15:31:54.275648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.774 [2024-07-11 15:31:54.275683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.774 [2024-07-11 15:31:54.275698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:23:40.774 [2024-07-11 15:31:54.275729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.774 [2024-07-11 15:31:54.364138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.774 [2024-07-11 15:31:54.364217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:40.774 [2024-07-11 15:31:54.364240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.350 ms 00:23:40.774 [2024-07-11 15:31:54.364258] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.033 [2024-07-11 15:31:54.396777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.033 [2024-07-11 15:31:54.396831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:41.033 [2024-07-11 15:31:54.396851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.464 ms 00:23:41.033 [2024-07-11 15:31:54.396866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.033 [2024-07-11 15:31:54.425791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.033 [2024-07-11 15:31:54.425848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:41.033 [2024-07-11 15:31:54.425865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.874 ms 00:23:41.033 [2024-07-11 15:31:54.425877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.033 [2024-07-11 15:31:54.454944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.033 [2024-07-11 15:31:54.454991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:41.033 [2024-07-11 15:31:54.455009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.024 ms 00:23:41.033 [2024-07-11 15:31:54.455051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.033 [2024-07-11 15:31:54.455158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.033 [2024-07-11 15:31:54.455182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:41.033 [2024-07-11 15:31:54.455195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:41.033 [2024-07-11 15:31:54.455210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.033 [2024-07-11 15:31:54.455317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.033 [2024-07-11 15:31:54.455340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:41.033 [2024-07-11 15:31:54.455371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:41.033 [2024-07-11 15:31:54.455399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.033 [2024-07-11 15:31:54.456561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3827.823 ms, result 0 00:23:41.033 { 00:23:41.033 "name": "ftl0", 00:23:41.033 "uuid": "be74e12b-85f1-4690-bff6-741dff03bc7b" 00:23:41.033 } 00:23:41.033 15:31:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:41.033 15:31:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:41.292 15:31:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:41.292 15:31:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:41.292 15:31:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:41.552 /dev/nbd0 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:41.552 1+0 records in 00:23:41.552 1+0 records out 00:23:41.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613829 s, 6.7 MB/s 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:23:41.552 15:31:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:41.812 [2024-07-11 15:31:55.169807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:41.812 [2024-07-11 15:31:55.169975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83119 ] 00:23:41.812 [2024-07-11 15:31:55.345625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.070 [2024-07-11 15:31:55.600863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.275  Copying: 189/1024 [MB] (189 MBps) Copying: 381/1024 [MB] (191 MBps) Copying: 570/1024 [MB] (189 MBps) Copying: 755/1024 [MB] (184 MBps) Copying: 946/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 187 MBps) 00:23:49.275 00:23:49.275 15:32:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:51.190 15:32:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:51.190 [2024-07-11 15:32:04.796262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
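The waitfornbd trace above is the test waiting for the kernel to actually expose the new nbd disk, then proving it is readable before any data is copied. Its control flow, reconstructed approximately from the xtrace (the retry sleep, the temp-file path, and the failure return are assumptions; only the loop bounds and the tests appear in the trace):

    # approximate shape of waitfornbd per the xtrace above (illustrative)
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # wait until the device is listed in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # read one 4 KiB block back through the device with O_DIRECT
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1   # assumed
        done
        return 1        # assumed failure path
    }

The spdk_dd transfers that follow move 262144 blocks of 4096 bytes each, i.e. exactly the 1024 MiB the Copying counters report.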
00:23:51.190 [2024-07-11 15:32:04.796439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83219 ]
00:23:51.448 [2024-07-11 15:32:04.969868] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:51.705 [2024-07-11 15:32:05.170891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:01.426  Copying: 1024/1024 [MB] (average 14 MBps)
00:25:01.426 00
00:25:01.426 15:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:25:01.684 15:33:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:25:01.941 15:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:25:01.941 [2024-07-11 15:33:15.426137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:01.941 [2024-07-11 15:33:15.426201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:01.941 [2024-07-11 15:33:15.426254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:25:01.941 [2024-07-11 15:33:15.426276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.426320] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:01.941 [2024-07-11 15:33:15.429724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.429918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:01.941 [2024-07-11 15:33:15.430085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:25:01.941 [2024-07-11 15:33:15.430147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.432164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.432346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:01.941 [2024-07-11 15:33:15.432479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:25:01.941 [2024-07-11 15:33:15.432540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.448467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.448562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:01.941 [2024-07-11 15:33:15.448581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.771 ms 00:25:01.941 [2024-07-11 15:33:15.448594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.454523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.454597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:01.941 [2024-07-11 15:33:15.454613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.876 ms 00:25:01.941 [2024-07-11 15:33:15.454625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.481521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.481590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:01.941 [2024-07-11 15:33:15.481609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.769 ms 00:25:01.941 [2024-07-11 15:33:15.481622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.498733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.498799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:01.941 [2024-07-11 15:33:15.498821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.066 ms 00:25:01.941 [2024-07-11 15:33:15.498834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.499025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 15:33:15.499105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:01.941 [2024-07-11 15:33:15.499119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:01.941 [2024-07-11 15:33:15.499132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.941 [2024-07-11 15:33:15.527401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.941 [2024-07-11 
15:33:15.527450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:01.941 [2024-07-11 15:33:15.527469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.228 ms 00:25:01.941 [2024-07-11 15:33:15.527483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.200 [2024-07-11 15:33:15.556242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.200 [2024-07-11 15:33:15.556324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:02.200 [2024-07-11 15:33:15.556360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.680 ms 00:25:02.200 [2024-07-11 15:33:15.556374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.200 [2024-07-11 15:33:15.585176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.200 [2024-07-11 15:33:15.585263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:02.200 [2024-07-11 15:33:15.585283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.739 ms 00:25:02.200 [2024-07-11 15:33:15.585296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.200 [2024-07-11 15:33:15.614706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.200 [2024-07-11 15:33:15.614754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:02.200 [2024-07-11 15:33:15.614787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.245 ms 00:25:02.200 [2024-07-11 15:33:15.614800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.200 [2024-07-11 15:33:15.614844] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:02.200 [2024-07-11 15:33:15.614870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:02.200 [2024-07-11 15:33:15.614956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.614987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615079] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 
[2024-07-11 15:33:15.615455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:25:02.201 [2024-07-11 15:33:15.615836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.615995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:02.201 [2024-07-11 15:33:15.616381] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:02.201 [2024-07-11 15:33:15.616394] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be74e12b-85f1-4690-bff6-741dff03bc7b 00:25:02.201 [2024-07-11 15:33:15.616408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:02.201 [2024-07-11 15:33:15.616419] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:02.202 [2024-07-11 15:33:15.616456] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:02.202 [2024-07-11 15:33:15.616468] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:02.202 [2024-07-11 15:33:15.616482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:02.202 [2024-07-11 15:33:15.616494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:02.202 [2024-07-11 15:33:15.616507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:02.202 [2024-07-11 15:33:15.616518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:02.202 [2024-07-11 15:33:15.616531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:02.202 [2024-07-11 15:33:15.616543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.202 [2024-07-11 15:33:15.616558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:02.202 [2024-07-11 15:33:15.616571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:25:02.202 [2024-07-11 15:33:15.616584] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.631859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.202 [2024-07-11 15:33:15.631904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:02.202 [2024-07-11 15:33:15.631938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.211 ms 00:25:02.202 [2024-07-11 15:33:15.631950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.632466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:02.202 [2024-07-11 15:33:15.632520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:02.202 [2024-07-11 15:33:15.632537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:25:02.202 [2024-07-11 15:33:15.632550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.678500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.202 [2024-07-11 15:33:15.678562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:02.202 [2024-07-11 15:33:15.678597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.202 [2024-07-11 15:33:15.678610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.678685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.202 [2024-07-11 15:33:15.678703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:02.202 [2024-07-11 15:33:15.678715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.202 [2024-07-11 15:33:15.678727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.678846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.202 [2024-07-11 15:33:15.678872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:02.202 [2024-07-11 15:33:15.678883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.202 [2024-07-11 15:33:15.678895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.678926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.202 [2024-07-11 15:33:15.678946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:02.202 [2024-07-11 15:33:15.678957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.202 [2024-07-11 15:33:15.678969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.202 [2024-07-11 15:33:15.766907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.202 [2024-07-11 15:33:15.766975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:02.202 [2024-07-11 15:33:15.767011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:02.202 [2024-07-11 15:33:15.767023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:02.460 [2024-07-11 15:33:15.847848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:02.460 [2024-07-11 15:33:15.847925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:02.460 [2024-07-11 15:33:15.847944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.847957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.848096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:02.460 [2024-07-11 15:33:15.848121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:02.460 [2024-07-11 15:33:15.848137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.848165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.848227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:02.460 [2024-07-11 15:33:15.848250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:02.460 [2024-07-11 15:33:15.848263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.848281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.848429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:02.460 [2024-07-11 15:33:15.848464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:02.460 [2024-07-11 15:33:15.848483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.848500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.848554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:02.460 [2024-07-11 15:33:15.848583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:02.460 [2024-07-11 15:33:15.848597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.848610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.848658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:02.460 [2024-07-11 15:33:15.848677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:02.460 [2024-07-11 15:33:15.848697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.848715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.848773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:02.460 [2024-07-11 15:33:15.848800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:02.460 [2024-07-11 15:33:15.848813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:02.460 [2024-07-11 15:33:15.848826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:02.460 [2024-07-11 15:33:15.849015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 422.847 ms, result 0
00:25:02.460 true
00:25:02.460 15:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82970
00:25:02.460 15:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82970
00:25:02.460 15:33:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:25:02.460 [2024-07-11 15:33:15.960862] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
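This kill -9 is the dirty shutdown the test is named for: the first SPDK target (pid 82970) is terminated without any chance to cleanly tear down the lvol store and base devices it still holds open, and its stale trace file in /dev/shm is removed by hand. When the spdk_dd below reopens the same devices from ftl.json, the blobstore detects the unclean state and replays its metadata (the "Performing recovery on blobstore" and "Recover: blob" notices), and the FTL superblock load reports "SHM: clean 0, shm_clean 0". The pattern, sketched (only the kill and rm lines mirror commands visible in this log; the rest is illustrative):

    # dirty-shutdown sketch; tgt_pid and the comments are illustrative
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 &    # the target that was pid 82970 here
    tgt_pid=$!
    # ...bdevs are configured and written over rpc.py while it runs...
    kill -9 "$tgt_pid"                   # SIGKILL: no graceful teardown
    rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"
    # the next SPDK app to load these devices must recover on-media metadata

The "line 87: 82970 Killed" message a little further down appears to be bash reporting the SIGKILLed background job when the script reaches its next command.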
DPDK 24.03.0 initialization... 00:25:02.460 [2024-07-11 15:33:15.961004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83929 ] 00:25:02.718 [2024-07-11 15:33:16.121455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.718 [2024-07-11 15:33:16.281465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.469  Copying: 188/1024 [MB] (188 MBps) Copying: 378/1024 [MB] (189 MBps) Copying: 576/1024 [MB] (198 MBps) Copying: 766/1024 [MB] (189 MBps) Copying: 950/1024 [MB] (184 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:25:09.469 00:25:09.469 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82970 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:09.469 15:33:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:09.469 [2024-07-11 15:33:23.007998] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:09.469 [2024-07-11 15:33:23.008183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83994 ] 00:25:09.726 [2024-07-11 15:33:23.169468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.727 [2024-07-11 15:33:23.330784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.292 [2024-07-11 15:33:23.623032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.292 [2024-07-11 15:33:23.623167] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:10.292 [2024-07-11 15:33:23.688830] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:10.292 [2024-07-11 15:33:23.689153] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:10.292 [2024-07-11 15:33:23.689397] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:10.551 [2024-07-11 15:33:23.981953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:23.982007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:10.551 [2024-07-11 15:33:23.982117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:10.551 [2024-07-11 15:33:23.982130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:23.982207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:23.982229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.551 [2024-07-11 15:33:23.982242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:10.551 [2024-07-11 15:33:23.982257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:23.982290] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:10.551 [2024-07-11 15:33:23.983279] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:10.551 [2024-07-11 15:33:23.983318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:23.983332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.551 [2024-07-11 15:33:23.983345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:25:10.551 [2024-07-11 15:33:23.983355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:23.984421] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:10.551 [2024-07-11 15:33:24.000712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.000781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:10.551 [2024-07-11 15:33:24.000815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.292 ms 00:25:10.551 [2024-07-11 15:33:24.000832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.000894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.000912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:10.551 [2024-07-11 15:33:24.000924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:10.551 [2024-07-11 15:33:24.000935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.005386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.005429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.551 [2024-07-11 15:33:24.005451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.371 ms 00:25:10.551 [2024-07-11 15:33:24.005463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.005553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.005571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.551 [2024-07-11 15:33:24.005584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:10.551 [2024-07-11 15:33:24.005595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.005654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.005672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:10.551 [2024-07-11 15:33:24.005700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:10.551 [2024-07-11 15:33:24.005729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.005776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:10.551 [2024-07-11 15:33:24.010011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.010098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.551 [2024-07-11 15:33:24.010116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:25:10.551 [2024-07-11 15:33:24.010128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 
15:33:24.010177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.010194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:10.551 [2024-07-11 15:33:24.010208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:10.551 [2024-07-11 15:33:24.010220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.010263] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:10.551 [2024-07-11 15:33:24.010294] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:10.551 [2024-07-11 15:33:24.010357] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:10.551 [2024-07-11 15:33:24.010392] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:10.551 [2024-07-11 15:33:24.010507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:10.551 [2024-07-11 15:33:24.010522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:10.551 [2024-07-11 15:33:24.010536] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:10.551 [2024-07-11 15:33:24.010550] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:10.551 [2024-07-11 15:33:24.010562] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:10.551 [2024-07-11 15:33:24.010573] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:10.551 [2024-07-11 15:33:24.010588] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:10.551 [2024-07-11 15:33:24.010599] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:10.551 [2024-07-11 15:33:24.010608] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:10.551 [2024-07-11 15:33:24.010619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.010630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:10.551 [2024-07-11 15:33:24.010641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:25:10.551 [2024-07-11 15:33:24.010666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.010742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.551 [2024-07-11 15:33:24.010755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:10.551 [2024-07-11 15:33:24.010766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:10.551 [2024-07-11 15:33:24.010776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.551 [2024-07-11 15:33:24.010875] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:10.551 [2024-07-11 15:33:24.010891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:10.551 [2024-07-11 15:33:24.010903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.551 [2024-07-11 15:33:24.010914] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:25:10.551 [2024-07-11 15:33:24.010924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:10.551 [2024-07-11 15:33:24.010933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:10.551 [2024-07-11 15:33:24.010943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:10.551 [2024-07-11 15:33:24.010954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:10.551 [2024-07-11 15:33:24.010963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:10.551 [2024-07-11 15:33:24.010972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.551 [2024-07-11 15:33:24.010982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:10.551 [2024-07-11 15:33:24.010991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:10.551 [2024-07-11 15:33:24.011000] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:10.551 [2024-07-11 15:33:24.011009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:10.551 [2024-07-11 15:33:24.011019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:10.551 [2024-07-11 15:33:24.011028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.551 [2024-07-11 15:33:24.011050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:10.551 [2024-07-11 15:33:24.011062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:10.552 [2024-07-11 15:33:24.011131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:10.552 [2024-07-11 15:33:24.011160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:10.552 [2024-07-11 15:33:24.011188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:10.552 [2024-07-11 15:33:24.011217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:10.552 [2024-07-11 15:33:24.011246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.552 [2024-07-11 15:33:24.011264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:10.552 [2024-07-11 15:33:24.011274] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:10.552 [2024-07-11 15:33:24.011284] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:10.552 [2024-07-11 15:33:24.011293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:10.552 [2024-07-11 15:33:24.011302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:10.552 [2024-07-11 15:33:24.011328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:10.552 [2024-07-11 15:33:24.011348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:10.552 [2024-07-11 15:33:24.011360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011369] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:10.552 [2024-07-11 15:33:24.011381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:10.552 [2024-07-11 15:33:24.011392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:10.552 [2024-07-11 15:33:24.011414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:10.552 [2024-07-11 15:33:24.011424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:10.552 [2024-07-11 15:33:24.011435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:10.552 [2024-07-11 15:33:24.011445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:10.552 [2024-07-11 15:33:24.011470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:10.552 [2024-07-11 15:33:24.011480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:10.552 [2024-07-11 15:33:24.011492] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:10.552 [2024-07-11 15:33:24.011510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:10.552 [2024-07-11 15:33:24.011541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:10.552 [2024-07-11 15:33:24.011552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:10.552 [2024-07-11 15:33:24.011563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:10.552 [2024-07-11 15:33:24.011574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:10.552 [2024-07-11 15:33:24.011585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:10.552 [2024-07-11 15:33:24.011595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:10.552 [2024-07-11 
15:33:24.011606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:10.552 [2024-07-11 15:33:24.011617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:10.552 [2024-07-11 15:33:24.011628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:10.552 [2024-07-11 15:33:24.011697] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:10.552 [2024-07-11 15:33:24.011723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:10.552 [2024-07-11 15:33:24.011746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:10.552 [2024-07-11 15:33:24.011756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:10.552 [2024-07-11 15:33:24.011766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:10.552 [2024-07-11 15:33:24.011777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.011788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:10.552 [2024-07-11 15:33:24.011799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:25:10.552 [2024-07-11 15:33:24.011809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.053733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.053971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.552 [2024-07-11 15:33:24.054137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.849 ms 00:25:10.552 [2024-07-11 15:33:24.054204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.054476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.054527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:10.552 [2024-07-11 15:33:24.054567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:10.552 [2024-07-11 15:33:24.054669] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.090688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.090946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.552 [2024-07-11 15:33:24.091099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.884 ms 00:25:10.552 [2024-07-11 15:33:24.091154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.091257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.091334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.552 [2024-07-11 15:33:24.091386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:10.552 [2024-07-11 15:33:24.091432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.091853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.091930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.552 [2024-07-11 15:33:24.092103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:25:10.552 [2024-07-11 15:33:24.092154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.092354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.092433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.552 [2024-07-11 15:33:24.092550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:25:10.552 [2024-07-11 15:33:24.092684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.107135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.107345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.552 [2024-07-11 15:33:24.107469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:25:10.552 [2024-07-11 15:33:24.107519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.121923] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:10.552 [2024-07-11 15:33:24.121959] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:10.552 [2024-07-11 15:33:24.121975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.121985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:10.552 [2024-07-11 15:33:24.121997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.307 ms 00:25:10.552 [2024-07-11 15:33:24.122006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 15:33:24.149044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.149372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:10.552 [2024-07-11 15:33:24.149494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.937 ms 00:25:10.552 [2024-07-11 15:33:24.149545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.552 [2024-07-11 
15:33:24.164024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.552 [2024-07-11 15:33:24.164252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:10.552 [2024-07-11 15:33:24.164411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.372 ms 00:25:10.552 [2024-07-11 15:33:24.164465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.178846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.179075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:10.811 [2024-07-11 15:33:24.179243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.205 ms 00:25:10.811 [2024-07-11 15:33:24.179267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.180064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.180123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:10.811 [2024-07-11 15:33:24.180161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:25:10.811 [2024-07-11 15:33:24.180172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.242704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.242775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:10.811 [2024-07-11 15:33:24.242811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.507 ms 00:25:10.811 [2024-07-11 15:33:24.242822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.255591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:10.811 [2024-07-11 15:33:24.258478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.258504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:10.811 [2024-07-11 15:33:24.258527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.588 ms 00:25:10.811 [2024-07-11 15:33:24.258537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.258654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.258673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:10.811 [2024-07-11 15:33:24.258690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:10.811 [2024-07-11 15:33:24.258701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.258798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.258815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:10.811 [2024-07-11 15:33:24.258827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:10.811 [2024-07-11 15:33:24.258837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.258883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.258898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:10.811 [2024-07-11 15:33:24.258925] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:10.811 [2024-07-11 15:33:24.258941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.258978] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:10.811 [2024-07-11 15:33:24.258994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.259005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:10.811 [2024-07-11 15:33:24.259017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:10.811 [2024-07-11 15:33:24.259027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.287525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.287565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.811 [2024-07-11 15:33:24.287604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.475 ms 00:25:10.811 [2024-07-11 15:33:24.287618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.287689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.811 [2024-07-11 15:33:24.287722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.811 [2024-07-11 15:33:24.287733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:10.811 [2024-07-11 15:33:24.287744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.811 [2024-07-11 15:33:24.288905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.406 ms, result 0 00:25:54.915  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 15:34:08.220995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.915 [2024-07-11 15:34:08.221116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.915
[2024-07-11 15:34:08.221141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:54.916 [2024-07-11 15:34:08.221155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.223464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:54.916 [2024-07-11 15:34:08.230194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.230236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.916 [2024-07-11 15:34:08.230256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.682 ms 00:25:54.916 [2024-07-11 15:34:08.230267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.242667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.242712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.916 [2024-07-11 15:34:08.242738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.299 ms 00:25:54.916 [2024-07-11 15:34:08.242749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.264713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.264757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.916 [2024-07-11 15:34:08.264791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.942 ms 00:25:54.916 [2024-07-11 15:34:08.264803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.270905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.270938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.916 [2024-07-11 15:34:08.270952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:25:54.916 [2024-07-11 15:34:08.270970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.299116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.299172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.916 [2024-07-11 15:34:08.299190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.074 ms 00:25:54.916 [2024-07-11 15:34:08.299200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.315787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.315827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.916 [2024-07-11 15:34:08.315844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.546 ms 00:25:54.916 [2024-07-11 15:34:08.315854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.429917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.429975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.916 [2024-07-11 15:34:08.429995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.018 ms 00:25:54.916 [2024-07-11 15:34:08.430006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 
15:34:08.457930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.457968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:54.916 [2024-07-11 15:34:08.457985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.859 ms 00:25:54.916 [2024-07-11 15:34:08.457995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.484898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.484936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:54.916 [2024-07-11 15:34:08.484951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.798 ms 00:25:54.916 [2024-07-11 15:34:08.484961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.916 [2024-07-11 15:34:08.513399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.916 [2024-07-11 15:34:08.513442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:54.916 [2024-07-11 15:34:08.513460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.399 ms 00:25:54.916 [2024-07-11 15:34:08.513471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.176 [2024-07-11 15:34:08.544045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.176 [2024-07-11 15:34:08.544096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:55.176 [2024-07-11 15:34:08.544129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.486 ms 00:25:55.176 [2024-07-11 15:34:08.544139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.176 [2024-07-11 15:34:08.544180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:55.176 [2024-07-11 15:34:08.544223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:25:55.176 [2024-07-11 15:34:08.544237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544367] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544769] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.544994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 
15:34:08.545075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:55.176 [2024-07-11 15:34:08.545204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:25:55.177 [2024-07-11 15:34:08.545400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:55.177 [2024-07-11 15:34:08.545598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:55.177 [2024-07-11 15:34:08.545608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be74e12b-85f1-4690-bff6-741dff03bc7b 00:25:55.177 [2024-07-11 15:34:08.545619] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:25:55.177 [2024-07-11 15:34:08.545630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131264 00:25:55.177 [2024-07-11 15:34:08.545652] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:25:55.177 [2024-07-11 15:34:08.545666] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:25:55.177 [2024-07-11 15:34:08.545676] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:55.177 [2024-07-11 15:34:08.545686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:55.177 [2024-07-11 15:34:08.545696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:55.177 [2024-07-11 15:34:08.545706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:55.177 [2024-07-11 15:34:08.545714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:55.177 [2024-07-11 15:34:08.545725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.177 [2024-07-11 15:34:08.545735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:55.177 [2024-07-11 15:34:08.545772] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:25:55.177 [2024-07-11 15:34:08.545783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.560550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.177 [2024-07-11 15:34:08.560730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:55.177 [2024-07-11 15:34:08.560846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.732 ms 00:25:55.177 [2024-07-11 15:34:08.560894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.561339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.177 [2024-07-11 15:34:08.561495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:55.177 [2024-07-11 15:34:08.561608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:25:55.177 [2024-07-11 15:34:08.561723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.594203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.594428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.177 [2024-07-11 15:34:08.594580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.594629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.594733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.594820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.177 [2024-07-11 15:34:08.594866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.594904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.595012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.595089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.177 [2024-07-11 15:34:08.595205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.595250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.595317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.595359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.177 [2024-07-11 15:34:08.595397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.595436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.680112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.680177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.177 [2024-07-11 15:34:08.680196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.680206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:25:55.177 [2024-07-11 15:34:08.753237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:55.177 [2024-07-11 15:34:08.753368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:55.177 [2024-07-11 15:34:08.753440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:55.177 [2024-07-11 15:34:08.753582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:55.177 [2024-07-11 15:34:08.753672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:55.177 [2024-07-11 15:34:08.753760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.177 [2024-07-11 15:34:08.753839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:55.177 [2024-07-11 15:34:08.753850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.177 [2024-07-11 15:34:08.753860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.177 [2024-07-11 15:34:08.753992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.075 ms, result 0 00:25:56.570 00:25:56.570 00:25:56.570 15:34:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:59.105 15:34:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:59.105 [2024-07-11 15:34:12.217934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:59.105 [2024-07-11 15:34:12.218158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84475 ] 00:25:59.105 [2024-07-11 15:34:12.393953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.105 [2024-07-11 15:34:12.602265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.364 [2024-07-11 15:34:12.880847] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:59.364 [2024-07-11 15:34:12.880938] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:59.623 [2024-07-11 15:34:13.039328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.039383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:59.623 [2024-07-11 15:34:13.039417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:59.623 [2024-07-11 15:34:13.039427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.039493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.039513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.623 [2024-07-11 15:34:13.039524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:59.623 [2024-07-11 15:34:13.039537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.039565] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:59.623 [2024-07-11 15:34:13.040585] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:59.623 [2024-07-11 15:34:13.040632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.040652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.623 [2024-07-11 15:34:13.040681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:25:59.623 [2024-07-11 15:34:13.040692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.041960] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:59.623 [2024-07-11 15:34:13.056443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.056484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:59.623 [2024-07-11 15:34:13.056516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.485 ms 00:25:59.623 [2024-07-11 15:34:13.056527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.056594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.056612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:59.623 [2024-07-11 15:34:13.056627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:59.623 [2024-07-11 
15:34:13.056637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.060884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.060924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.623 [2024-07-11 15:34:13.060954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.151 ms 00:25:59.623 [2024-07-11 15:34:13.060964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.061100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.061124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.623 [2024-07-11 15:34:13.061135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:59.623 [2024-07-11 15:34:13.061146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.061221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.061238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:59.623 [2024-07-11 15:34:13.061250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:59.623 [2024-07-11 15:34:13.061261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.061293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:59.623 [2024-07-11 15:34:13.065287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.065320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.623 [2024-07-11 15:34:13.065349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:25:59.623 [2024-07-11 15:34:13.065359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.065401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.065416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:59.623 [2024-07-11 15:34:13.065427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:59.623 [2024-07-11 15:34:13.065436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.065477] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:59.623 [2024-07-11 15:34:13.065505] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:59.623 [2024-07-11 15:34:13.065545] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:59.623 [2024-07-11 15:34:13.065565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:59.623 [2024-07-11 15:34:13.065659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:59.623 [2024-07-11 15:34:13.065673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:59.623 [2024-07-11 15:34:13.065686] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:59.623 
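
Each FTL management step above is traced by mngt/ftl_mngt.c as a fixed four-entry group (Action, name, duration, status), so a per-step timing profile can be pulled straight out of this console output. A minimal sketch, assuming the output has been captured to ftl_dirty_shutdown.log (a hypothetical path):

    # Pair each step name with the duration that follows it; slowest steps first.
    grep -oE 'name: [A-Za-z0-9 ]*[A-Za-z]|duration: [0-9.]+ ms' ftl_dirty_shutdown.log |
      paste - - |            # join "name: ..." with its "duration: ... ms"
      sort -t: -k3,3 -rn |   # numeric sort on the duration field
      head

On this startup the outliers are easy to spot by eye as well, e.g. 'Load super block' at 14.485 ms against 0.025 ms for 'Validate super block'.
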
[2024-07-11 15:34:13.065699] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:59.623 [2024-07-11 15:34:13.065711] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:59.623 [2024-07-11 15:34:13.065722] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:59.623 [2024-07-11 15:34:13.065732] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:59.623 [2024-07-11 15:34:13.065741] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:59.623 [2024-07-11 15:34:13.065751] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:59.623 [2024-07-11 15:34:13.065762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.623 [2024-07-11 15:34:13.065776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:59.623 [2024-07-11 15:34:13.065787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:25:59.623 [2024-07-11 15:34:13.065797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.623 [2024-07-11 15:34:13.065875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.065888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:59.624 [2024-07-11 15:34:13.065899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:59.624 [2024-07-11 15:34:13.065908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.066006] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:59.624 [2024-07-11 15:34:13.066084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:59.624 [2024-07-11 15:34:13.066104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:59.624 [2024-07-11 15:34:13.066137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:59.624 [2024-07-11 15:34:13.066169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.624 [2024-07-11 15:34:13.066189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:59.624 [2024-07-11 15:34:13.066199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:59.624 [2024-07-11 15:34:13.066209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:59.624 [2024-07-11 15:34:13.066219] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:59.624 [2024-07-11 15:34:13.066231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:59.624 [2024-07-11 15:34:13.066240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:25:59.624 [2024-07-11 15:34:13.066261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:59.624 [2024-07-11 15:34:13.066319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:59.624 [2024-07-11 15:34:13.066363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:59.624 [2024-07-11 15:34:13.066407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:59.624 [2024-07-11 15:34:13.066465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:59.624 [2024-07-11 15:34:13.066494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.624 [2024-07-11 15:34:13.066529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:59.624 [2024-07-11 15:34:13.066538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:59.624 [2024-07-11 15:34:13.066548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:59.624 [2024-07-11 15:34:13.066557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:59.624 [2024-07-11 15:34:13.066568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:59.624 [2024-07-11 15:34:13.066577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:59.624 [2024-07-11 15:34:13.066597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:59.624 [2024-07-11 15:34:13.066607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066617] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:59.624 [2024-07-11 15:34:13.066628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:59.624 [2024-07-11 15:34:13.066638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:59.624 [2024-07-11 15:34:13.066661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:59.624 [2024-07-11 15:34:13.066672] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:59.624 [2024-07-11 15:34:13.066682] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:59.624 [2024-07-11 15:34:13.066692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:59.624 [2024-07-11 15:34:13.066701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:59.624 [2024-07-11 15:34:13.066712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:59.624 [2024-07-11 15:34:13.066723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:59.624 [2024-07-11 15:34:13.066752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:59.624 [2024-07-11 15:34:13.066775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:59.624 [2024-07-11 15:34:13.066786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:59.624 [2024-07-11 15:34:13.066797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:59.624 [2024-07-11 15:34:13.066807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:59.624 [2024-07-11 15:34:13.066818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:59.624 [2024-07-11 15:34:13.066829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:59.624 [2024-07-11 15:34:13.066840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:59.624 [2024-07-11 15:34:13.066851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:59.624 [2024-07-11 15:34:13.066862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:59.624 [2024-07-11 15:34:13.066915] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:59.624 [2024-07-11 15:34:13.066928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:59.624 [2024-07-11 15:34:13.066951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:59.624 [2024-07-11 15:34:13.066962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:59.624 [2024-07-11 15:34:13.066973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:59.624 [2024-07-11 15:34:13.066985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.067002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:59.624 [2024-07-11 15:34:13.067014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:25:59.624 [2024-07-11 15:34:13.067038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.110496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.110562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.624 [2024-07-11 15:34:13.110614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.370 ms 00:25:59.624 [2024-07-11 15:34:13.110625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.110754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.110769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:59.624 [2024-07-11 15:34:13.110788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:25:59.624 [2024-07-11 15:34:13.110798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.148597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.148668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.624 [2024-07-11 15:34:13.148701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.712 ms 00:25:59.624 [2024-07-11 15:34:13.148712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.148778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.148793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.624 [2024-07-11 15:34:13.148805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:59.624 [2024-07-11 15:34:13.148815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.149220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.149238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.624 [2024-07-11 15:34:13.149251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:25:59.624 [2024-07-11 15:34:13.149262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.149430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
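
The two layout dumps above describe the same regions in different units: the MiB view first, then the superblock tables, where blk_offs and blk_sz count FTL blocks. Assuming the 4 KiB block size this build appears to use, the tables cross-check cleanly: 0x5000 blocks for the l2p region (type 0x2) is exactly the 80.00 MiB shown earlier, which also matches the 20971520 L2P entries at an address size of 4 bytes apiece. A small converter, as a sketch:

    # Convert a region size printed in FTL blocks (hex, as in the superblock dump) to MiB,
    # assuming the 4 KiB FTL block size this build appears to use.
    blk_to_mib() {
      local blocks=$(( $1 ))   # bash arithmetic accepts the 0x-prefixed values as printed
      awk -v b="$blocks" 'BEGIN { printf "%d blocks = %.2f MiB\n", b, b * 4096 / 1048576 }'
    }
    blk_to_mib 0x5000   # l2p             -> 80.00 MiB
    blk_to_mib 0x80     # band_md         ->  0.50 MiB
    blk_to_mib 0x800    # each p2l region ->  8.00 MiB
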
00:25:59.624 [2024-07-11 15:34:13.149450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.624 [2024-07-11 15:34:13.149492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:25:59.624 [2024-07-11 15:34:13.149521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.164456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.164496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.624 [2024-07-11 15:34:13.164526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.908 ms 00:25:59.624 [2024-07-11 15:34:13.164537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.179230] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:59.624 [2024-07-11 15:34:13.179272] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:59.624 [2024-07-11 15:34:13.179306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.179317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:59.624 [2024-07-11 15:34:13.179329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.637 ms 00:25:59.624 [2024-07-11 15:34:13.179338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.206118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.206160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:59.624 [2024-07-11 15:34:13.206192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.736 ms 00:25:59.624 [2024-07-11 15:34:13.206210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.220521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.220557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:59.624 [2024-07-11 15:34:13.220588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.263 ms 00:25:59.624 [2024-07-11 15:34:13.220598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.235482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.235519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:59.624 [2024-07-11 15:34:13.235550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.844 ms 00:25:59.624 [2024-07-11 15:34:13.235561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.624 [2024-07-11 15:34:13.236402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.624 [2024-07-11 15:34:13.236455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:59.624 [2024-07-11 15:34:13.236470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:25:59.624 [2024-07-11 15:34:13.236481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.306990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.307072] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:59.883 [2024-07-11 15:34:13.307094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.483 ms 00:25:59.883 [2024-07-11 15:34:13.307106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.319689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:59.883 [2024-07-11 15:34:13.322369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.322436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:59.883 [2024-07-11 15:34:13.322453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.185 ms 00:25:59.883 [2024-07-11 15:34:13.322463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.322570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.322589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:59.883 [2024-07-11 15:34:13.322601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:59.883 [2024-07-11 15:34:13.322611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.324281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.324322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:59.883 [2024-07-11 15:34:13.324337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.621 ms 00:25:59.883 [2024-07-11 15:34:13.324348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.324394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.324423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:59.883 [2024-07-11 15:34:13.324434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:59.883 [2024-07-11 15:34:13.324444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.324484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:59.883 [2024-07-11 15:34:13.324500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.324511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:59.883 [2024-07-11 15:34:13.324527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:59.883 [2024-07-11 15:34:13.324538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.355106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.355150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:59.883 [2024-07-11 15:34:13.355184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.546 ms 00:25:59.883 [2024-07-11 15:34:13.355195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.355276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.883 [2024-07-11 15:34:13.355304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:59.883 [2024-07-11 15:34:13.355317] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:59.883 [2024-07-11 15:34:13.355329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.883 [2024-07-11 15:34:13.362784] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.696 ms, result 0 00:26:37.796  Copying: 856/1048576 [kB] (856 kBps) Copying: 4468/1048576 [kB] (3612 kBps) Copying: 25/1024 [MB] (21 MBps) ... Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-11 15:34:51.318254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.318349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:37.796 [2024-07-11 15:34:51.318372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:37.796 [2024-07-11 15:34:51.318384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.796 [2024-07-11 15:34:51.318427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:37.796 [2024-07-11 15:34:51.321814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.321840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:37.796 [2024-07-11 15:34:51.321854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.364 ms 00:26:37.796 [2024-07-11 15:34:51.321865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.796 [2024-07-11 15:34:51.322148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.322169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:37.796 [2024-07-11 15:34:51.322182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:26:37.796 [2024-07-11 15:34:51.322193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.796 [2024-07-11 15:34:51.332921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.332964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:37.796 [2024-07-11 15:34:51.332982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.698 ms 00:26:37.796 [2024-07-11 15:34:51.332995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status:
0 00:26:37.796 [2024-07-11 15:34:51.339198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.339226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:37.796 [2024-07-11 15:34:51.339239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.144 ms 00:26:37.796 [2024-07-11 15:34:51.339250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.796 [2024-07-11 15:34:51.369770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.369834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:37.796 [2024-07-11 15:34:51.369852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.446 ms 00:26:37.796 [2024-07-11 15:34:51.369864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.796 [2024-07-11 15:34:51.387200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.387275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:37.796 [2024-07-11 15:34:51.387291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.267 ms 00:26:37.796 [2024-07-11 15:34:51.387303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.796 [2024-07-11 15:34:51.391359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.796 [2024-07-11 15:34:51.391521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:37.796 [2024-07-11 15:34:51.391662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.006 ms 00:26:37.796 [2024-07-11 15:34:51.391714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.056 [2024-07-11 15:34:51.423007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.056 [2024-07-11 15:34:51.423239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:38.056 [2024-07-11 15:34:51.423401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.240 ms 00:26:38.056 [2024-07-11 15:34:51.423457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.056 [2024-07-11 15:34:51.453248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.056 [2024-07-11 15:34:51.453445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:38.056 [2024-07-11 15:34:51.453604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.624 ms 00:26:38.056 [2024-07-11 15:34:51.453658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.056 [2024-07-11 15:34:51.482251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.056 [2024-07-11 15:34:51.482440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:38.056 [2024-07-11 15:34:51.482569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.428 ms 00:26:38.056 [2024-07-11 15:34:51.482649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.056 [2024-07-11 15:34:51.511235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.056 [2024-07-11 15:34:51.511507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:38.056 [2024-07-11 15:34:51.511627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.462 ms 00:26:38.056 [2024-07-11 
15:34:51.511677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.056 [2024-07-11 15:34:51.511761] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:38.056 [2024-07-11 15:34:51.511906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.056 [2024-07-11 15:34:51.511972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:38.056 [2024-07-11 15:34:51.512072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.512915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513418] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:38.056 [2024-07-11 15:34:51.513440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 
15:34:51.513706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.513980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:26:38.057 [2024-07-11 15:34:51.513992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.514948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:38.057 [2024-07-11 15:34:51.515293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:38.057 [2024-07-11 15:34:51.515305] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be74e12b-85f1-4690-bff6-741dff03bc7b 00:26:38.058 [2024-07-11 15:34:51.515317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:38.058 [2024-07-11 15:34:51.515327] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:26:38.058 [2024-07-11 15:34:51.515337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:26:38.058 [2024-07-11 15:34:51.515360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:26:38.058 [2024-07-11 15:34:51.515371] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:38.058 [2024-07-11 15:34:51.515386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:38.058 [2024-07-11 15:34:51.515397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:38.058 [2024-07-11 15:34:51.515406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:38.058 [2024-07-11 15:34:51.515416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:38.058 [2024-07-11 15:34:51.515428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.058 [2024-07-11 15:34:51.515444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:38.058 [2024-07-11 15:34:51.515456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:26:38.058 [2024-07-11 15:34:51.515467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.530996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.058 [2024-07-11 15:34:51.531042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:38.058 [2024-07-11 15:34:51.531068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.473 ms 00:26:38.058 [2024-07-11 15:34:51.531099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.531558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.058 [2024-07-11 15:34:51.531583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:38.058 [2024-07-11 15:34:51.531597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:26:38.058 [2024-07-11 15:34:51.531608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.568705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.058 [2024-07-11 15:34:51.568753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.058 [2024-07-11 15:34:51.568775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.058 [2024-07-11 15:34:51.568786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.568861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.058 [2024-07-11 15:34:51.568875] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.058 [2024-07-11 15:34:51.568886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.058 [2024-07-11 15:34:51.568896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.568988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.058 [2024-07-11 15:34:51.569006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.058 [2024-07-11 15:34:51.569053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.058 [2024-07-11 15:34:51.569091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.569114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.058 [2024-07-11 15:34:51.569127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.058 [2024-07-11 15:34:51.569138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.058 [2024-07-11 15:34:51.569165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.058 [2024-07-11 15:34:51.663763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.058 [2024-07-11 15:34:51.663832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.058 [2024-07-11 15:34:51.663857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.058 [2024-07-11 15:34:51.663869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.316 [2024-07-11 15:34:51.743748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.316 [2024-07-11 15:34:51.743812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.316 [2024-07-11 15:34:51.743829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.316 [2024-07-11 15:34:51.743840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.316 [2024-07-11 15:34:51.743910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.316 [2024-07-11 15:34:51.743926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.316 [2024-07-11 15:34:51.743936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.316 [2024-07-11 15:34:51.743946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.317 [2024-07-11 15:34:51.743995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.317 [2024-07-11 15:34:51.744008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.317 [2024-07-11 15:34:51.744316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.317 [2024-07-11 15:34:51.744376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.317 [2024-07-11 15:34:51.744516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.317 [2024-07-11 15:34:51.744536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.317 [2024-07-11 15:34:51.744550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.317 [2024-07-11 15:34:51.744561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.317 [2024-07-11 15:34:51.744619] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.317 [2024-07-11 15:34:51.744637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:38.317 [2024-07-11 15:34:51.744663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.317 [2024-07-11 15:34:51.744674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.317 [2024-07-11 15:34:51.744719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.317 [2024-07-11 15:34:51.744734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.317 [2024-07-11 15:34:51.744746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.317 [2024-07-11 15:34:51.744756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.317 [2024-07-11 15:34:51.744811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:38.317 [2024-07-11 15:34:51.744829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.317 [2024-07-11 15:34:51.744840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:38.317 [2024-07-11 15:34:51.744851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.317 [2024-07-11 15:34:51.744989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.712 ms, result 0 00:26:39.288 00:26:39.288 00:26:39.288 15:34:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:41.195 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:41.195 15:34:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:41.453 [2024-07-11 15:34:54.887298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
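
Steps @93 and @95 of dirty_shutdown.sh are a plain read-back-and-verify loop: spdk_dd copies a 262144-block slice of the ftl0 bdev out to a file (1024 MB, per the progress meter above), and md5sum -c compares it with a checksum taken before the dirty shutdown. A condensed sketch of the second slice, using only flags that appear in the log; the testfile2.md5 name is an assumption, since the log only shows that checksum being computed at step @90:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Read the second 262144-block slice of ftl0 back out through spdk_dd ...
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
        --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"
    # ... and fail if it no longer matches the pre-shutdown checksum
    # (testfile2.md5 is an assumed name for the checksum written at step @90).
    md5sum -c "$SPDK/test/ftl/testfile2.md5"
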
00:26:41.453 [2024-07-11 15:34:54.887930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84900 ] 00:26:41.453 [2024-07-11 15:34:55.056549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.711 [2024-07-11 15:34:55.269875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.970 [2024-07-11 15:34:55.558214] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.970 [2024-07-11 15:34:55.558307] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:42.229 [2024-07-11 15:34:55.717948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.229 [2024-07-11 15:34:55.718006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:42.229 [2024-07-11 15:34:55.718085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:42.229 [2024-07-11 15:34:55.718098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.229 [2024-07-11 15:34:55.718174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.229 [2024-07-11 15:34:55.718211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:42.229 [2024-07-11 15:34:55.718225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:42.229 [2024-07-11 15:34:55.718242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.229 [2024-07-11 15:34:55.718285] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:42.229 [2024-07-11 15:34:55.719256] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:42.229 [2024-07-11 15:34:55.719299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.229 [2024-07-11 15:34:55.719317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:42.229 [2024-07-11 15:34:55.719330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:26:42.229 [2024-07-11 15:34:55.719341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.720589] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:42.230 [2024-07-11 15:34:55.735520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.735562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:42.230 [2024-07-11 15:34:55.735595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.932 ms 00:26:42.230 [2024-07-11 15:34:55.735606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.735684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.735703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:42.230 [2024-07-11 15:34:55.735719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:42.230 [2024-07-11 15:34:55.735729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.740503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:42.230 [2024-07-11 15:34:55.740544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:42.230 [2024-07-11 15:34:55.740574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.692 ms 00:26:42.230 [2024-07-11 15:34:55.740585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.740672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.740692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:42.230 [2024-07-11 15:34:55.740703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:42.230 [2024-07-11 15:34:55.740713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.740771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.740788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:42.230 [2024-07-11 15:34:55.740799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:42.230 [2024-07-11 15:34:55.740809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.740841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:42.230 [2024-07-11 15:34:55.744857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.744893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:42.230 [2024-07-11 15:34:55.744923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.024 ms 00:26:42.230 [2024-07-11 15:34:55.744934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.744978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.744993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:42.230 [2024-07-11 15:34:55.745004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:42.230 [2024-07-11 15:34:55.745015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.745078] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:42.230 [2024-07-11 15:34:55.745109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:42.230 [2024-07-11 15:34:55.745149] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:42.230 [2024-07-11 15:34:55.745171] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:42.230 [2024-07-11 15:34:55.745282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:42.230 [2024-07-11 15:34:55.745297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:42.230 [2024-07-11 15:34:55.745310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:42.230 [2024-07-11 15:34:55.745323] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745335] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745347] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:42.230 [2024-07-11 15:34:55.745357] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:42.230 [2024-07-11 15:34:55.745366] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:42.230 [2024-07-11 15:34:55.745376] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:42.230 [2024-07-11 15:34:55.745387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.745402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:42.230 [2024-07-11 15:34:55.745413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:26:42.230 [2024-07-11 15:34:55.745423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.745512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.230 [2024-07-11 15:34:55.745526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:42.230 [2024-07-11 15:34:55.745538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:42.230 [2024-07-11 15:34:55.745548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.230 [2024-07-11 15:34:55.745657] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:42.230 [2024-07-11 15:34:55.745672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:42.230 [2024-07-11 15:34:55.745688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:42.230 [2024-07-11 15:34:55.745717] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:42.230 [2024-07-11 15:34:55.745747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.230 [2024-07-11 15:34:55.745765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:42.230 [2024-07-11 15:34:55.745773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:42.230 [2024-07-11 15:34:55.745782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:42.230 [2024-07-11 15:34:55.745794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:42.230 [2024-07-11 15:34:55.745804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:42.230 [2024-07-11 15:34:55.745813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:42.230 [2024-07-11 15:34:55.745831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745840] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:42.230 [2024-07-11 15:34:55.745871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:42.230 [2024-07-11 15:34:55.745899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:42.230 [2024-07-11 15:34:55.745926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:42.230 [2024-07-11 15:34:55.745953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:42.230 [2024-07-11 15:34:55.745970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:42.230 [2024-07-11 15:34:55.745979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:42.230 [2024-07-11 15:34:55.745988] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.230 [2024-07-11 15:34:55.745997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:42.230 [2024-07-11 15:34:55.746006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:42.230 [2024-07-11 15:34:55.746083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:42.230 [2024-07-11 15:34:55.746094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:42.230 [2024-07-11 15:34:55.746105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:42.230 [2024-07-11 15:34:55.746115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.746125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:42.230 [2024-07-11 15:34:55.746135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:42.230 [2024-07-11 15:34:55.746146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.746155] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:42.230 [2024-07-11 15:34:55.746166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:42.230 [2024-07-11 15:34:55.746178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:42.230 [2024-07-11 15:34:55.746192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:42.230 [2024-07-11 15:34:55.746203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:42.230 [2024-07-11 15:34:55.746213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:42.230 [2024-07-11 15:34:55.746223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:42.230 
[2024-07-11 15:34:55.746233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:42.230 [2024-07-11 15:34:55.746243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:42.230 [2024-07-11 15:34:55.746252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:42.230 [2024-07-11 15:34:55.746264] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:42.230 [2024-07-11 15:34:55.746283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.230 [2024-07-11 15:34:55.746296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:42.230 [2024-07-11 15:34:55.746306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:42.230 [2024-07-11 15:34:55.746334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:42.230 [2024-07-11 15:34:55.746345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:42.230 [2024-07-11 15:34:55.746356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:42.231 [2024-07-11 15:34:55.746367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:42.231 [2024-07-11 15:34:55.746378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:42.231 [2024-07-11 15:34:55.746389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:42.231 [2024-07-11 15:34:55.746399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:42.231 [2024-07-11 15:34:55.746410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:42.231 [2024-07-11 15:34:55.746422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:42.231 [2024-07-11 15:34:55.746433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:42.231 [2024-07-11 15:34:55.746444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:42.231 [2024-07-11 15:34:55.746470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:42.231 [2024-07-11 15:34:55.746481] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:42.231 [2024-07-11 15:34:55.746493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.231 [2024-07-11 15:34:55.746504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.231 [2024-07-11 15:34:55.746516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:42.231 [2024-07-11 15:34:55.746527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:42.231 [2024-07-11 15:34:55.746537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:42.231 [2024-07-11 15:34:55.746549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.746566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:42.231 [2024-07-11 15:34:55.746579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:26:42.231 [2024-07-11 15:34:55.746590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.785590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.785665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.231 [2024-07-11 15:34:55.785701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.914 ms 00:26:42.231 [2024-07-11 15:34:55.785713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.785826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.785841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.231 [2024-07-11 15:34:55.785853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:42.231 [2024-07-11 15:34:55.785863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.821001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.821070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.231 [2024-07-11 15:34:55.821086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.055 ms 00:26:42.231 [2024-07-11 15:34:55.821096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.821153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.821167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.231 [2024-07-11 15:34:55.821179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:42.231 [2024-07-11 15:34:55.821188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.821570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.821588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.231 [2024-07-11 15:34:55.821600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:26:42.231 [2024-07-11 15:34:55.821610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.821762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.821779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.231 [2024-07-11 15:34:55.821790] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:26:42.231 [2024-07-11 15:34:55.821800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.231 [2024-07-11 15:34:55.836215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.231 [2024-07-11 15:34:55.836253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.231 [2024-07-11 15:34:55.836285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.392 ms 00:26:42.231 [2024-07-11 15:34:55.836296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.852088] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:42.490 [2024-07-11 15:34:55.852132] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:42.490 [2024-07-11 15:34:55.852166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.852177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:42.490 [2024-07-11 15:34:55.852189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.756 ms 00:26:42.490 [2024-07-11 15:34:55.852199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.878326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.878381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:42.490 [2024-07-11 15:34:55.878415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.084 ms 00:26:42.490 [2024-07-11 15:34:55.878446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.892363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.892401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:42.490 [2024-07-11 15:34:55.892432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.867 ms 00:26:42.490 [2024-07-11 15:34:55.892442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.906352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.906392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:42.490 [2024-07-11 15:34:55.906422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.870 ms 00:26:42.490 [2024-07-11 15:34:55.906432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.907203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.907241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.490 [2024-07-11 15:34:55.907256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:26:42.490 [2024-07-11 15:34:55.907266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.971217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.971284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:42.490 [2024-07-11 15:34:55.971319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.925 ms 00:26:42.490 [2024-07-11 15:34:55.971330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.982549] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:42.490 [2024-07-11 15:34:55.984913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.984945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:42.490 [2024-07-11 15:34:55.984975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.506 ms 00:26:42.490 [2024-07-11 15:34:55.984985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.985112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.985132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:42.490 [2024-07-11 15:34:55.985145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:42.490 [2024-07-11 15:34:55.985155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.985892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.985931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.490 [2024-07-11 15:34:55.985946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:26:42.490 [2024-07-11 15:34:55.985956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.986006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.986097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:42.490 [2024-07-11 15:34:55.986110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.490 [2024-07-11 15:34:55.986122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:55.986165] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:42.490 [2024-07-11 15:34:55.986182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:55.986194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:42.490 [2024-07-11 15:34:55.986210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:42.490 [2024-07-11 15:34:55.986221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:56.013646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:56.013687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:42.490 [2024-07-11 15:34:56.013721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.398 ms 00:26:42.490 [2024-07-11 15:34:56.013731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.490 [2024-07-11 15:34:56.013804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.490 [2024-07-11 15:34:56.013829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:42.490 [2024-07-11 15:34:56.013840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:42.490 [2024-07-11 15:34:56.013851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
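
Every management step above is logged by mngt/ftl_mngt.c:trace_step as a fixed quartet (Action or Rollback, then name, duration, status), and the finish_msg record just below totals the whole sequence (296.661 ms for 'FTL startup'). A hedged helper for cross-checking that total against the per-step figures, where ftl.log is a hypothetical capture of this output rather than a file the test writes:

# Sum the per-step "duration:" fields; finish_msg totals use "duration ="
# instead of "duration:" and are therefore not double-counted.
grep -o 'duration: [0-9.]* ms' ftl.log \
    | awk '{ total += $2 } END { printf "steps sum to %.3f ms\n", total }'
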
00:26:42.490 [2024-07-11 15:34:56.015176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.661 ms, result 0 00:27:24.917  Copying: 25/1024 [MB] (25 MBps) Copying: 49/1024 [MB] (24 MBps) Copying: 74/1024 [MB] (24 MBps) Copying: 98/1024 [MB] (24 MBps) Copying: 122/1024 [MB] (24 MBps) Copying: 147/1024 [MB] (24 MBps) Copying: 171/1024 [MB] (24 MBps) Copying: 195/1024 [MB] (24 MBps) Copying: 220/1024 [MB] (24 MBps) Copying: 244/1024 [MB] (24 MBps) Copying: 269/1024 [MB] (24 MBps) Copying: 294/1024 [MB] (25 MBps) Copying: 319/1024 [MB] (25 MBps) Copying: 344/1024 [MB] (24 MBps) Copying: 369/1024 [MB] (25 MBps) Copying: 394/1024 [MB] (24 MBps) Copying: 419/1024 [MB] (25 MBps) Copying: 444/1024 [MB] (24 MBps) Copying: 469/1024 [MB] (24 MBps) Copying: 494/1024 [MB] (25 MBps) Copying: 519/1024 [MB] (25 MBps) Copying: 545/1024 [MB] (25 MBps) Copying: 570/1024 [MB] (25 MBps) Copying: 595/1024 [MB] (24 MBps) Copying: 620/1024 [MB] (24 MBps) Copying: 645/1024 [MB] (24 MBps) Copying: 669/1024 [MB] (24 MBps) Copying: 693/1024 [MB] (24 MBps) Copying: 716/1024 [MB] (23 MBps) Copying: 739/1024 [MB] (23 MBps) Copying: 762/1024 [MB] (22 MBps) Copying: 787/1024 [MB] (25 MBps) Copying: 810/1024 [MB] (23 MBps) Copying: 833/1024 [MB] (23 MBps) Copying: 857/1024 [MB] (23 MBps) Copying: 880/1024 [MB] (23 MBps) Copying: 903/1024 [MB] (23 MBps) Copying: 926/1024 [MB] (23 MBps) Copying: 950/1024 [MB] (23 MBps) Copying: 973/1024 [MB] (23 MBps) Copying: 996/1024 [MB] (23 MBps) Copying: 1020/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-11 15:35:38.374991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.375089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:24.917 [2024-07-11 15:35:38.375111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:24.917 [2024-07-11 15:35:38.375124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.375173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:24.917 [2024-07-11 15:35:38.379072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.379113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:24.917 [2024-07-11 15:35:38.379130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:27:24.917 [2024-07-11 15:35:38.379141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.379403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.379421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:24.917 [2024-07-11 15:35:38.379434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:27:24.917 [2024-07-11 15:35:38.379446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.383658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.383707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:24.917 [2024-07-11 15:35:38.383721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms 00:27:24.917 [2024-07-11 15:35:38.383732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
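
The flattened "Copying: N/1024 [MB]" run above is spdk_dd repeatedly overwriting one progress line: 1024 MB moved between elapsed stamps 00:26:42 and 00:27:24, roughly 42 s, which agrees with the "(average 24 MBps)" it prints at the end. A quick arithmetic check:

awk 'BEGIN { printf "%.1f MBps\n", 1024 / 42 }'   # prints 24.4 MBps
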
00:27:24.917 [2024-07-11 15:35:38.391394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.391433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:24.917 [2024-07-11 15:35:38.391472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.639 ms 00:27:24.917 [2024-07-11 15:35:38.391483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.420974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.421014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:24.917 [2024-07-11 15:35:38.421074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.388 ms 00:27:24.917 [2024-07-11 15:35:38.421085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.437176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.437230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:24.917 [2024-07-11 15:35:38.437262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.040 ms 00:27:24.917 [2024-07-11 15:35:38.437273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.441360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.441404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:24.917 [2024-07-11 15:35:38.441437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.043 ms 00:27:24.917 [2024-07-11 15:35:38.441471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.469344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.469382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:24.917 [2024-07-11 15:35:38.469415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.851 ms 00:27:24.917 [2024-07-11 15:35:38.469439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.497081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.497117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:24.917 [2024-07-11 15:35:38.497148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.602 ms 00:27:24.917 [2024-07-11 15:35:38.497158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.917 [2024-07-11 15:35:38.524662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.917 [2024-07-11 15:35:38.524699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:24.917 [2024-07-11 15:35:38.524744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.465 ms 00:27:24.917 [2024-07-11 15:35:38.524754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.177 [2024-07-11 15:35:38.552928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.177 [2024-07-11 15:35:38.552966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.177 [2024-07-11 15:35:38.552997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.095 ms 00:27:25.177 [2024-07-11 
15:35:38.553007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.177 [2024-07-11 15:35:38.553055] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.177 [2024-07-11 15:35:38.553076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:25.177 [2024-07-11 15:35:38.553088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:27:25.177 [2024-07-11 15:35:38.553099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 
15:35:38.553601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.177 [2024-07-11 15:35:38.553679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:27:25.178 [2024-07-11 15:35:38.553844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.553996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.178 [2024-07-11 15:35:38.554193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.178 [2024-07-11 15:35:38.554204] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: be74e12b-85f1-4690-bff6-741dff03bc7b 00:27:25.178 [2024-07-11 15:35:38.554215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:27:25.178 [2024-07-11 15:35:38.554225] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:25.178 [2024-07-11 15:35:38.554242] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.178 [2024-07-11 15:35:38.554252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.178 [2024-07-11 15:35:38.554262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.178 [2024-07-11 15:35:38.554273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.178 [2024-07-11 15:35:38.554283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.178 [2024-07-11 15:35:38.554292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.178 [2024-07-11 15:35:38.554301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.178 [2024-07-11 15:35:38.554316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.178 [2024-07-11 15:35:38.554327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.178 [2024-07-11 15:35:38.554339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:27:25.178 [2024-07-11 15:35:38.554364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.569326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.178 [2024-07-11 15:35:38.569362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.178 [2024-07-11 15:35:38.569405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.917 ms 00:27:25.178 [2024-07-11 15:35:38.569415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.569842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.178 [2024-07-11 15:35:38.569872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.178 [2024-07-11 15:35:38.569886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:27:25.178 [2024-07-11 15:35:38.569896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.602134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.602180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.178 [2024-07-11 15:35:38.602212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.602223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.602284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.602299] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.178 [2024-07-11 15:35:38.602310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.602321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.602417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.602465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.178 [2024-07-11 15:35:38.602476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.602485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.602520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.602532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.178 [2024-07-11 15:35:38.602542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.602552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.688677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.688735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.178 [2024-07-11 15:35:38.688779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.688789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.762859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.762918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.178 [2024-07-11 15:35:38.762951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.762961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.178 [2024-07-11 15:35:38.763031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.178 [2024-07-11 15:35:38.763090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.178 [2024-07-11 15:35:38.763102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.178 [2024-07-11 15:35:38.763112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.179 [2024-07-11 15:35:38.763155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.179 [2024-07-11 15:35:38.763169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.179 [2024-07-11 15:35:38.763179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.179 [2024-07-11 15:35:38.763189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.179 [2024-07-11 15:35:38.763311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.179 [2024-07-11 15:35:38.763350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.179 [2024-07-11 15:35:38.763362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.179 [2024-07-11 15:35:38.763372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.179 [2024-07-11 15:35:38.763451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:25.179 [2024-07-11 15:35:38.763484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.179 [2024-07-11 15:35:38.763496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.179 [2024-07-11 15:35:38.763508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.179 [2024-07-11 15:35:38.763553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.179 [2024-07-11 15:35:38.763568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.179 [2024-07-11 15:35:38.763587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.179 [2024-07-11 15:35:38.763599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.179 [2024-07-11 15:35:38.763672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.179 [2024-07-11 15:35:38.763690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.179 [2024-07-11 15:35:38.763702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.179 [2024-07-11 15:35:38.763713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.179 [2024-07-11 15:35:38.763865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.865 ms, result 0 00:27:26.117 00:27:26.117 00:27:26.117 15:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:28.653 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:28.653 15:35:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:28.653 15:35:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82970 00:27:28.653 15:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 82970 ']' 00:27:28.653 15:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 82970 00:27:28.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (82970) - No such process 00:27:28.653 Process with pid 82970 is not found 00:27:28.653 15:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 82970 is not found' 00:27:28.653 15:35:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:28.912 Remove shared memory files 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:28.912 15:35:42 
ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:28.912 ************************************ 00:27:28.912 END TEST ftl_dirty_shutdown 00:27:28.912 ************************************ 00:27:28.912 00:27:28.912 real 3m56.485s 00:27:28.912 user 4m35.187s 00:27:28.912 sys 0m35.725s 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.912 15:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:28.912 15:35:42 ftl -- common/autotest_common.sh@1142 -- # return 0 00:27:28.912 15:35:42 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:28.912 15:35:42 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:27:28.912 15:35:42 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.912 15:35:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:28.912 ************************************ 00:27:28.912 START TEST ftl_upgrade_shutdown 00:27:28.912 ************************************ 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:28.912 * Looking for test storage... 00:27:28.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
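The run_test call above hands upgrade_shutdown.sh the base and NV-cache controller BDFs as positional arguments. A minimal sketch of re-running just this stage by hand, assuming the same checkout path as this log and a shell where the two NVMe controllers are already bound for SPDK use:

    cd /home/vagrant/spdk_repo/spdk
    ./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0   # base bdev BDF, NV-cache BDF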
00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:28.912 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:28.913 
15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:28.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85426 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85426 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85426 ']' 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.913 15:35:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:29.172 [2024-07-11 15:35:42.637295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
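The tcp_target_setup traced above boils down to two steps: launch spdk_tgt pinned to core 0, then block until its RPC socket accepts connections. A minimal sketch, assuming the repo root as cwd and test/common/autotest_common.sh sourced so the harness's waitforlisten helper is available (it polls the default socket, /var/tmp/spdk.sock; the pid differs per run):

    build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once the target answers RPCs on /var/tmp/spdk.sock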
00:27:29.172 [2024-07-11 15:35:42.638265] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85426 ] 00:27:29.431 [2024-07-11 15:35:42.825816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.431 [2024-07-11 15:35:43.039323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:30.365 15:35:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:30.622 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:30.623 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:30.881 { 00:27:30.881 "name": "basen1", 00:27:30.881 "aliases": [ 00:27:30.881 "c3fba854-f1cb-490f-b326-bb7538b75001" 00:27:30.881 ], 00:27:30.881 "product_name": "NVMe disk", 00:27:30.881 "block_size": 4096, 00:27:30.881 "num_blocks": 1310720, 00:27:30.881 "uuid": "c3fba854-f1cb-490f-b326-bb7538b75001", 00:27:30.881 "assigned_rate_limits": { 00:27:30.881 "rw_ios_per_sec": 0, 00:27:30.881 "rw_mbytes_per_sec": 0, 00:27:30.881 "r_mbytes_per_sec": 0, 00:27:30.881 "w_mbytes_per_sec": 0 00:27:30.881 }, 00:27:30.881 "claimed": true, 00:27:30.881 "claim_type": "read_many_write_one", 00:27:30.881 "zoned": false, 00:27:30.881 "supported_io_types": { 00:27:30.881 "read": true, 00:27:30.881 "write": true, 00:27:30.881 "unmap": true, 00:27:30.881 "flush": true, 00:27:30.881 "reset": true, 00:27:30.881 "nvme_admin": true, 00:27:30.881 "nvme_io": true, 00:27:30.881 "nvme_io_md": false, 00:27:30.881 "write_zeroes": true, 00:27:30.881 "zcopy": false, 00:27:30.881 "get_zone_info": false, 00:27:30.881 "zone_management": false, 00:27:30.881 "zone_append": false, 00:27:30.881 "compare": true, 00:27:30.881 "compare_and_write": false, 00:27:30.881 "abort": true, 00:27:30.881 "seek_hole": false, 00:27:30.881 "seek_data": false, 00:27:30.881 "copy": true, 00:27:30.881 "nvme_iov_md": false 00:27:30.881 }, 00:27:30.881 "driver_specific": { 00:27:30.881 "nvme": [ 00:27:30.881 { 00:27:30.881 "pci_address": "0000:00:11.0", 00:27:30.881 "trid": { 00:27:30.881 "trtype": "PCIe", 00:27:30.881 "traddr": "0000:00:11.0" 00:27:30.881 }, 00:27:30.881 "ctrlr_data": { 00:27:30.881 "cntlid": 0, 00:27:30.881 "vendor_id": "0x1b36", 00:27:30.881 "model_number": "QEMU NVMe Ctrl", 00:27:30.881 "serial_number": "12341", 00:27:30.881 "firmware_revision": "8.0.0", 00:27:30.881 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:30.881 "oacs": { 00:27:30.881 "security": 0, 00:27:30.881 "format": 1, 00:27:30.881 "firmware": 0, 00:27:30.881 "ns_manage": 1 00:27:30.881 }, 00:27:30.881 "multi_ctrlr": false, 00:27:30.881 "ana_reporting": false 00:27:30.881 }, 00:27:30.881 "vs": { 00:27:30.881 "nvme_version": "1.4" 00:27:30.881 }, 00:27:30.881 "ns_data": { 00:27:30.881 "id": 1, 00:27:30.881 "can_share": false 00:27:30.881 } 00:27:30.881 } 00:27:30.881 ], 00:27:30.881 "mp_policy": "active_passive" 00:27:30.881 } 00:27:30.881 } 00:27:30.881 ]' 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
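The size probe traced above is an ordinary RPC-plus-jq pipeline, and the logged numbers check out. A sketch of the same steps by hand, with the controller name and BDF taken from the trace:

    scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # namespace appears as bdev "basen1"
    scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size'   # 4096
    scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks'   # 1310720
    # 4096 B/block * 1310720 blocks = 5368709120 B = 5120 MiB, matching bdev_size=5120 above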
00:27:30.881 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:31.139 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=a57df313-a1fd-457c-9ca1-8a108a6a3ac4 00:27:31.139 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:31.139 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a57df313-a1fd-457c-9ca1-8a108a6a3ac4 00:27:31.397 15:35:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:31.655 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=830a70ac-ca68-48d0-bdbb-5198cffd462c 00:27:31.655 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 830a70ac-ca68-48d0-bdbb-5198cffd462c 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=1bd85139-7f06-4d47-822e-b8d1c20873cf 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 1bd85139-7f06-4d47-822e-b8d1c20873cf ]] 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 1bd85139-7f06-4d47-822e-b8d1c20873cf 5120 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=1bd85139-7f06-4d47-822e-b8d1c20873cf 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1bd85139-7f06-4d47-822e-b8d1c20873cf 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1bd85139-7f06-4d47-822e-b8d1c20873cf 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:31.913 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bd85139-7f06-4d47-822e-b8d1c20873cf 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:32.172 { 00:27:32.172 "name": "1bd85139-7f06-4d47-822e-b8d1c20873cf", 00:27:32.172 "aliases": [ 00:27:32.172 "lvs/basen1p0" 00:27:32.172 ], 00:27:32.172 "product_name": "Logical Volume", 00:27:32.172 "block_size": 4096, 00:27:32.172 "num_blocks": 5242880, 00:27:32.172 "uuid": "1bd85139-7f06-4d47-822e-b8d1c20873cf", 00:27:32.172 "assigned_rate_limits": { 00:27:32.172 "rw_ios_per_sec": 0, 00:27:32.172 "rw_mbytes_per_sec": 0, 00:27:32.172 "r_mbytes_per_sec": 0, 00:27:32.172 "w_mbytes_per_sec": 0 00:27:32.172 }, 00:27:32.172 "claimed": false, 00:27:32.172 "zoned": false, 00:27:32.172 "supported_io_types": { 00:27:32.172 "read": true, 00:27:32.172 "write": true, 00:27:32.172 "unmap": true, 00:27:32.172 "flush": false, 00:27:32.172 "reset": true, 00:27:32.172 "nvme_admin": false, 00:27:32.172 "nvme_io": false, 00:27:32.172 "nvme_io_md": false, 00:27:32.172 "write_zeroes": true, 00:27:32.172 "zcopy": false, 
00:27:32.172 "get_zone_info": false, 00:27:32.172 "zone_management": false, 00:27:32.172 "zone_append": false, 00:27:32.172 "compare": false, 00:27:32.172 "compare_and_write": false, 00:27:32.172 "abort": false, 00:27:32.172 "seek_hole": true, 00:27:32.172 "seek_data": true, 00:27:32.172 "copy": false, 00:27:32.172 "nvme_iov_md": false 00:27:32.172 }, 00:27:32.172 "driver_specific": { 00:27:32.172 "lvol": { 00:27:32.172 "lvol_store_uuid": "830a70ac-ca68-48d0-bdbb-5198cffd462c", 00:27:32.172 "base_bdev": "basen1", 00:27:32.172 "thin_provision": true, 00:27:32.172 "num_allocated_clusters": 0, 00:27:32.172 "snapshot": false, 00:27:32.172 "clone": false, 00:27:32.172 "esnap_clone": false 00:27:32.172 } 00:27:32.172 } 00:27:32.172 } 00:27:32.172 ]' 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:32.172 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:32.430 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:32.430 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:32.430 15:35:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:32.688 15:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:32.688 15:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:32.688 15:35:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 1bd85139-7f06-4d47-822e-b8d1c20873cf -c cachen1p0 --l2p_dram_limit 2 00:27:32.948 [2024-07-11 15:35:46.453791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.453868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:32.948 [2024-07-11 15:35:46.453888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:32.948 [2024-07-11 15:35:46.453901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.453974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.453999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:32.948 [2024-07-11 15:35:46.454053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:32.948 [2024-07-11 15:35:46.454086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.454118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:32.948 [2024-07-11 15:35:46.455230] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:32.948 [2024-07-11 15:35:46.455263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.455281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:32.948 [2024-07-11 15:35:46.455308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.153 ms 00:27:32.948 [2024-07-11 15:35:46.455320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.455447] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b7000be0-aea9-4b0d-973e-ac7c5680efe3 00:27:32.948 [2024-07-11 15:35:46.456413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.456459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:32.948 [2024-07-11 15:35:46.456494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:32.948 [2024-07-11 15:35:46.456506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.460759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.460801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:32.948 [2024-07-11 15:35:46.460839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.199 ms 00:27:32.948 [2024-07-11 15:35:46.460850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.460924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.460942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:32.948 [2024-07-11 15:35:46.460955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:32.948 [2024-07-11 15:35:46.460965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.461032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.461247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:32.948 [2024-07-11 15:35:46.461318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:32.948 [2024-07-11 15:35:46.461363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.461434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:32.948 [2024-07-11 15:35:46.465849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.465907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:32.948 [2024-07-11 15:35:46.465923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.429 ms 00:27:32.948 [2024-07-11 15:35:46.465934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.465969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.465986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:32.948 [2024-07-11 15:35:46.465997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:32.948 [2024-07-11 15:35:46.466052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 
15:35:46.466152] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:32.948 [2024-07-11 15:35:46.466335] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:32.948 [2024-07-11 15:35:46.466368] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:32.948 [2024-07-11 15:35:46.466388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:32.948 [2024-07-11 15:35:46.466403] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:32.948 [2024-07-11 15:35:46.466434] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:32.948 [2024-07-11 15:35:46.466446] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:32.948 [2024-07-11 15:35:46.466459] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:32.948 [2024-07-11 15:35:46.466474] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:32.948 [2024-07-11 15:35:46.466502] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:32.948 [2024-07-11 15:35:46.466513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.466541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:32.948 [2024-07-11 15:35:46.466552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.363 ms 00:27:32.948 [2024-07-11 15:35:46.466564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.466651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.948 [2024-07-11 15:35:46.466669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:32.948 [2024-07-11 15:35:46.466681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:27:32.948 [2024-07-11 15:35:46.466693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.948 [2024-07-11 15:35:46.466797] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:32.948 [2024-07-11 15:35:46.466819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:32.948 [2024-07-11 15:35:46.466831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:32.948 [2024-07-11 15:35:46.466844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.948 [2024-07-11 15:35:46.466855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:32.948 [2024-07-11 15:35:46.466867] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:32.948 [2024-07-11 15:35:46.466889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:32.948 [2024-07-11 15:35:46.466902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:32.948 [2024-07-11 15:35:46.466912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:32.948 [2024-07-11 15:35:46.466923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.948 [2024-07-11 15:35:46.466933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:32.948 [2024-07-11 15:35:46.466947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:32.948 
[2024-07-11 15:35:46.466956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.948 [2024-07-11 15:35:46.466968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:32.948 [2024-07-11 15:35:46.466978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:32.948 [2024-07-11 15:35:46.466990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.948 [2024-07-11 15:35:46.467000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:32.948 [2024-07-11 15:35:46.467014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:32.948 [2024-07-11 15:35:46.467023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.948 [2024-07-11 15:35:46.467035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:32.948 [2024-07-11 15:35:46.467045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:32.948 [2024-07-11 15:35:46.467056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.948 [2024-07-11 15:35:46.467066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:32.948 [2024-07-11 15:35:46.467077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:32.948 [2024-07-11 15:35:46.467087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.948 [2024-07-11 15:35:46.467099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:32.948 [2024-07-11 15:35:46.467123] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:32.948 [2024-07-11 15:35:46.467138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.949 [2024-07-11 15:35:46.467148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:32.949 [2024-07-11 15:35:46.467160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:32.949 [2024-07-11 15:35:46.467169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:32.949 [2024-07-11 15:35:46.467181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:32.949 [2024-07-11 15:35:46.467191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:32.949 [2024-07-11 15:35:46.467220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.949 [2024-07-11 15:35:46.467230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:32.949 [2024-07-11 15:35:46.467242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:32.949 [2024-07-11 15:35:46.467252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.949 [2024-07-11 15:35:46.467265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:32.949 [2024-07-11 15:35:46.467275] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:32.949 [2024-07-11 15:35:46.467287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.949 [2024-07-11 15:35:46.467297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:32.949 [2024-07-11 15:35:46.467309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:32.949 [2024-07-11 15:35:46.467319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.949 [2024-07-11 15:35:46.467330] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:27:32.949 [2024-07-11 15:35:46.467341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:32.949 [2024-07-11 15:35:46.467354] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:32.949 [2024-07-11 15:35:46.467366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:32.949 [2024-07-11 15:35:46.467379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:32.949 [2024-07-11 15:35:46.467390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:32.949 [2024-07-11 15:35:46.467404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:32.949 [2024-07-11 15:35:46.467414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:32.949 [2024-07-11 15:35:46.467426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:32.949 [2024-07-11 15:35:46.467436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:32.949 [2024-07-11 15:35:46.467452] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:32.949 [2024-07-11 15:35:46.467466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:32.949 [2024-07-11 15:35:46.467494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:32.949 [2024-07-11 15:35:46.467531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:32.949 [2024-07-11 15:35:46.467542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:32.949 [2024-07-11 15:35:46.467571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:32.949 [2024-07-11 15:35:46.467582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 
blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:32.949 [2024-07-11 15:35:46.467667] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:32.949 [2024-07-11 15:35:46.467678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.949 [2024-07-11 15:35:46.467703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:32.949 [2024-07-11 15:35:46.467715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:32.949 [2024-07-11 15:35:46.467726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:32.949 [2024-07-11 15:35:46.467739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.949 [2024-07-11 15:35:46.467750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:32.949 [2024-07-11 15:35:46.467763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.002 ms 00:27:32.949 [2024-07-11 15:35:46.467777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.949 [2024-07-11 15:35:46.467833] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
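The layout that the traced bdev_ftl_create (base lvol plus cachen1p0, --l2p_dram_limit 2) reports here is internally consistent; for example, the l2p region size follows directly from the logged entry count and address size, with the hex block counts coming from the SB metadata dump above:

    echo $(( 3774873 * 4 ))     # L2P table bytes: 15099492, i.e. ~14.40 MiB
    echo $(( 0xe80 * 4096 ))    # region type 0x2 spans 0xe80 blocks = 15204352 B = 14.50 MiB
    # so the 14.50 MiB l2p region is just the table rounded up to whole 4 KiB blocks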
00:27:32.949 [2024-07-11 15:35:46.467850] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:35.506 [2024-07-11 15:35:48.519183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.519274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:35.506 [2024-07-11 15:35:48.519314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2051.361 ms 00:27:35.506 [2024-07-11 15:35:48.519326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.549862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.549918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:35.506 [2024-07-11 15:35:48.549957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.292 ms 00:27:35.506 [2024-07-11 15:35:48.549968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.550138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.550162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:35.506 [2024-07-11 15:35:48.550178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:35.506 [2024-07-11 15:35:48.550193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.585372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.585439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:35.506 [2024-07-11 15:35:48.585492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.121 ms 00:27:35.506 [2024-07-11 15:35:48.585503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.585564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.585582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:35.506 [2024-07-11 15:35:48.585596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:35.506 [2024-07-11 15:35:48.585606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.586085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.586106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:35.506 [2024-07-11 15:35:48.586122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.384 ms 00:27:35.506 [2024-07-11 15:35:48.586134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.586205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.586231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:35.506 [2024-07-11 15:35:48.586250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:35.506 [2024-07-11 15:35:48.586261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.604117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.604189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:35.506 [2024-07-11 15:35:48.604212] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.825 ms 00:27:35.506 [2024-07-11 15:35:48.604224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.618610] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:35.506 [2024-07-11 15:35:48.619509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.619551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:35.506 [2024-07-11 15:35:48.619570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.169 ms 00:27:35.506 [2024-07-11 15:35:48.619585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.657713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.657795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:35.506 [2024-07-11 15:35:48.657817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.080 ms 00:27:35.506 [2024-07-11 15:35:48.657830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.657947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.657972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:35.506 [2024-07-11 15:35:48.657985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:27:35.506 [2024-07-11 15:35:48.657999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.687368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.687432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:35.506 [2024-07-11 15:35:48.687451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.239 ms 00:27:35.506 [2024-07-11 15:35:48.687464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.716672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.716736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:35.506 [2024-07-11 15:35:48.716753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.155 ms 00:27:35.506 [2024-07-11 15:35:48.716766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.717532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.717569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:35.506 [2024-07-11 15:35:48.717585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.719 ms 00:27:35.506 [2024-07-11 15:35:48.717601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.801776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.801857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:35.506 [2024-07-11 15:35:48.801878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 84.109 ms 00:27:35.506 [2024-07-11 15:35:48.801895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.832495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:35.506 [2024-07-11 15:35:48.832578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:35.506 [2024-07-11 15:35:48.832597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.550 ms 00:27:35.506 [2024-07-11 15:35:48.832611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.862591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.862792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:35.506 [2024-07-11 15:35:48.862927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.933 ms 00:27:35.506 [2024-07-11 15:35:48.862984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.893429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.893635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:35.506 [2024-07-11 15:35:48.893663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.355 ms 00:27:35.506 [2024-07-11 15:35:48.893678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.893737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.893758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:35.506 [2024-07-11 15:35:48.893771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:35.506 [2024-07-11 15:35:48.893787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.893898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.506 [2024-07-11 15:35:48.893922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:35.506 [2024-07-11 15:35:48.893937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:35.506 [2024-07-11 15:35:48.893951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.506 [2024-07-11 15:35:48.895275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2440.845 ms, result 0 00:27:35.506 { 00:27:35.506 "name": "ftl", 00:27:35.506 "uuid": "b7000be0-aea9-4b0d-973e-ac7c5680efe3" 00:27:35.506 } 00:27:35.506 15:35:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:35.831 [2024-07-11 15:35:49.178371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.831 15:35:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:36.096 15:35:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:36.096 [2024-07-11 15:35:49.642963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:36.096 15:35:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:36.354 [2024-07-11 15:35:49.904514] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:36.354 15:35:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:36.921 Fill FTL, iteration 1 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=85536 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 85536 /var/tmp/spdk.tgt.sock 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 85536 ']' 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:36.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.921 15:35:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:36.921 [2024-07-11 15:35:50.371091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
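At this point the FTL bdev has been exported over NVMe/TCP and a second SPDK app is coming up on core 1 to act as the initiator. Condensed from the RPCs traced above and below, both ends on localhost and the initiator on its own RPC socket:

    # Target side, socket /var/tmp/spdk.sock:
    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Initiator side, socket /var/tmp/spdk.tgt.sock:
    build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    waitforlisten $! /var/tmp/spdk.tgt.sock
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
        -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes bdev "ftln1"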
00:27:36.921 [2024-07-11 15:35:50.371447] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85536 ] 00:27:36.921 [2024-07-11 15:35:50.531001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.180 [2024-07-11 15:35:50.708822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.116 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:38.116 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:27:38.116 15:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:38.116 ftln1 00:27:38.116 15:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:38.116 15:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 85536 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85536 ']' 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85536 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85536 00:27:38.376 killing process with pid 85536 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85536' 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85536 00:27:38.376 15:35:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85536 00:27:40.279 15:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:40.279 15:35:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:40.537 [2024-07-11 15:35:53.896545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
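The helper app is only needed long enough to dump a bdev-only config; it is then killed, and spdk_dd re-creates the TCP-attached bdev from that JSON on its own. A sketch of what common.sh assembles here (the redirection into ini.json is an assumption implied by the --json flag spdk_dd is given):

    {
      echo '{"subsystems": ['
      scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'
    } > test/ftl/config/ini.json
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0   # fill GiB #1: 1024 x 1 MiB at QD 2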
00:27:40.537 [2024-07-11 15:35:53.896721] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85585 ] 00:27:40.537 [2024-07-11 15:35:54.071046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.795 [2024-07-11 15:35:54.245594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.351  Copying: 211/1024 [MB] (211 MBps) Copying: 422/1024 [MB] (211 MBps) Copying: 634/1024 [MB] (212 MBps) Copying: 845/1024 [MB] (211 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:27:47.351 00:27:47.351 Calculate MD5 checksum, iteration 1 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:47.351 15:36:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:47.351 [2024-07-11 15:36:00.643597] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
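Checksumming reverses the direction: the same 1 GiB window is read back from ftln1 into a scratch file and fingerprinted, per the flags traced above (--ib instead of --ob, --skip instead of --seek):

    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum test/ftl/file | cut -f1 -d' '   # recorded in sums[] for comparison later in the test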
00:27:47.351 [2024-07-11 15:36:00.643764] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85657 ] 00:27:47.351 [2024-07-11 15:36:00.813750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.610 [2024-07-11 15:36:00.991490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.883  Copying: 469/1024 [MB] (469 MBps) Copying: 957/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 479 MBps) 00:27:50.883 00:27:50.883 15:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:50.883 15:36:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d495be9b1096b33c6c4d4786b0d94fed 00:27:53.435 Fill FTL, iteration 2 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:53.435 15:36:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:53.435 [2024-07-11 15:36:06.720480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
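The interleaved upgrade_shutdown.sh@38-@48 fragments above are one pass of a fill-and-verify loop: at this point sums[0]=d495be9b1096b33c6c4d4786b0d94fed has been recorded and the iteration-2 fill is starting. Reassembled as a sketch from the xtrace; the loop scaffolding and offset arithmetic are assumptions, the commands and flags are not:

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2 seek=0 skip=0
    declare -a sums
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testfile --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        # Record the checksum so it can be compared against a re-read after the
        # prepared shutdown and the upgrade restart.
        sums[i]=$(md5sum $testfile | cut -f1 -d' ')
    done

With --bs=1048576 each unit of --seek/--skip is 1 MiB, so offset 1024 is the 1 GiB mark: iteration 2 exercises the second gigabyte of ftln1.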
00:27:53.435 [2024-07-11 15:36:06.720651] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85718 ] 00:27:53.435 [2024-07-11 15:36:06.891112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.693 [2024-07-11 15:36:07.074526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.196  Copying: 212/1024 [MB] (212 MBps) Copying: 426/1024 [MB] (214 MBps) Copying: 628/1024 [MB] (202 MBps) Copying: 842/1024 [MB] (214 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:28:00.196 00:28:00.196 Calculate MD5 checksum, iteration 2 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:00.196 15:36:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:00.196 [2024-07-11 15:36:13.530823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:00.196 [2024-07-11 15:36:13.530990] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85788 ] 00:28:00.196 [2024-07-11 15:36:13.700993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.454 [2024-07-11 15:36:13.866717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.560  Copying: 458/1024 [MB] (458 MBps) Copying: 945/1024 [MB] (487 MBps) Copying: 1024/1024 [MB] (average 474 MBps) 00:28:04.560 00:28:04.560 15:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:04.560 15:36:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:07.093 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:07.093 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ff5c5613ecd3f779fbfd40cc1c4613a5 00:28:07.093 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:07.093 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:07.093 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:07.093 [2024-07-11 15:36:20.498391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.093 [2024-07-11 15:36:20.498476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:07.093 [2024-07-11 15:36:20.498511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:07.093 [2024-07-11 15:36:20.498522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.093 [2024-07-11 15:36:20.498556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.093 [2024-07-11 15:36:20.498571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:07.094 [2024-07-11 15:36:20.498582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.094 [2024-07-11 15:36:20.498601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.094 [2024-07-11 15:36:20.498626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.094 [2024-07-11 15:36:20.498640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:07.094 [2024-07-11 15:36:20.498663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.094 [2024-07-11 15:36:20.498672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.094 [2024-07-11 15:36:20.498781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.389 ms, result 0 00:28:07.094 true 00:28:07.094 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:07.352 { 00:28:07.352 "name": "ftl", 00:28:07.352 "properties": [ 00:28:07.352 { 00:28:07.352 "name": "superblock_version", 00:28:07.352 "value": 5, 00:28:07.352 "read-only": true 00:28:07.352 }, 00:28:07.352 { 00:28:07.352 "name": "base_device", 00:28:07.352 "bands": [ 00:28:07.352 { 00:28:07.352 "id": 0, 00:28:07.352 "state": "FREE", 00:28:07.352 "validity": 0.0 00:28:07.352 }, 
00:28:07.352 { 00:28:07.352 "id": 1, 00:28:07.352 "state": "FREE", 00:28:07.352 "validity": 0.0 00:28:07.352 }, 00:28:07.352 { 00:28:07.353 "id": 2, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 3, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 4, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 5, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 6, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 7, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 8, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 9, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 10, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 11, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 12, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 13, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 14, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 15, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 16, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 17, 00:28:07.353 "state": "FREE", 00:28:07.353 "validity": 0.0 00:28:07.353 } 00:28:07.353 ], 00:28:07.353 "read-only": true 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "name": "cache_device", 00:28:07.353 "type": "bdev", 00:28:07.353 "chunks": [ 00:28:07.353 { 00:28:07.353 "id": 0, 00:28:07.353 "state": "INACTIVE", 00:28:07.353 "utilization": 0.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 1, 00:28:07.353 "state": "CLOSED", 00:28:07.353 "utilization": 1.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 2, 00:28:07.353 "state": "CLOSED", 00:28:07.353 "utilization": 1.0 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 3, 00:28:07.353 "state": "OPEN", 00:28:07.353 "utilization": 0.001953125 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "id": 4, 00:28:07.353 "state": "OPEN", 00:28:07.353 "utilization": 0.0 00:28:07.353 } 00:28:07.353 ], 00:28:07.353 "read-only": true 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "name": "verbose_mode", 00:28:07.353 "value": true, 00:28:07.353 "unit": "", 00:28:07.353 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:07.353 }, 00:28:07.353 { 00:28:07.353 "name": "prep_upgrade_on_shutdown", 00:28:07.353 "value": false, 00:28:07.353 "unit": "", 00:28:07.353 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:07.353 } 00:28:07.353 ] 00:28:07.353 } 00:28:07.353 15:36:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:07.612 [2024-07-11 15:36:20.985876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.612 [2024-07-11 
15:36:20.985935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:07.612 [2024-07-11 15:36:20.985970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.612 [2024-07-11 15:36:20.985980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.612 [2024-07-11 15:36:20.986062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.612 [2024-07-11 15:36:20.986082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:07.612 [2024-07-11 15:36:20.986095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:07.612 [2024-07-11 15:36:20.986123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.612 [2024-07-11 15:36:20.986153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.612 [2024-07-11 15:36:20.986168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:07.612 [2024-07-11 15:36:20.986180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.612 [2024-07-11 15:36:20.986190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.612 [2024-07-11 15:36:20.986268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.374 ms, result 0 00:28:07.612 true 00:28:07.612 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:07.612 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:07.612 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:07.612 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:07.612 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:07.612 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:07.872 [2024-07-11 15:36:21.458481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.872 [2024-07-11 15:36:21.458552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:07.872 [2024-07-11 15:36:21.458588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.872 [2024-07-11 15:36:21.458598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.872 [2024-07-11 15:36:21.458631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.872 [2024-07-11 15:36:21.458647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:07.872 [2024-07-11 15:36:21.458657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.872 [2024-07-11 15:36:21.458668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.872 [2024-07-11 15:36:21.458693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.872 [2024-07-11 15:36:21.458706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:07.872 [2024-07-11 15:36:21.458716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:07.872 [2024-07-11 15:36:21.458726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
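The upgrade_shutdown.sh@63-64 fragment above boils the property dump down to one number: how many NV-cache chunks actually hold data before the shutdown is triggered. The jq pipeline is verbatim from the trace; the surrounding guard is an assumed sketch of what the bare [[ 3 -eq 0 ]] test is standing in for:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    # Two CLOSED chunks plus one partially written OPEN chunk yield used=3 here;
    # used=0 would mean the fills above never went through the write buffer.
    if [[ $used -eq 0 ]]; then
        echo "NV cache is empty, nothing to upgrade through" >&2
        exit 1
    fi

The test flips verbose_mode to true before each dump, which is what exposes the advanced property detail being filtered here.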
00:28:07.872 [2024-07-11 15:36:21.458796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.301 ms, result 0 00:28:07.872 true 00:28:07.872 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:08.130 { 00:28:08.131 "name": "ftl", 00:28:08.131 "properties": [ 00:28:08.131 { 00:28:08.131 "name": "superblock_version", 00:28:08.131 "value": 5, 00:28:08.131 "read-only": true 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "name": "base_device", 00:28:08.131 "bands": [ 00:28:08.131 { 00:28:08.131 "id": 0, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 1, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 2, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 3, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 4, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 5, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 6, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 7, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 8, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 9, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 10, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 11, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 12, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 13, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 14, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 15, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 16, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 17, 00:28:08.131 "state": "FREE", 00:28:08.131 "validity": 0.0 00:28:08.131 } 00:28:08.131 ], 00:28:08.131 "read-only": true 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "name": "cache_device", 00:28:08.131 "type": "bdev", 00:28:08.131 "chunks": [ 00:28:08.131 { 00:28:08.131 "id": 0, 00:28:08.131 "state": "INACTIVE", 00:28:08.131 "utilization": 0.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 1, 00:28:08.131 "state": "CLOSED", 00:28:08.131 "utilization": 1.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 2, 00:28:08.131 "state": "CLOSED", 00:28:08.131 "utilization": 1.0 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 3, 00:28:08.131 "state": "OPEN", 00:28:08.131 "utilization": 0.001953125 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "id": 4, 00:28:08.131 "state": "OPEN", 00:28:08.131 "utilization": 0.0 00:28:08.131 } 00:28:08.131 ], 00:28:08.131 "read-only": true 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "name": "verbose_mode", 00:28:08.131 "value": 
true, 00:28:08.131 "unit": "", 00:28:08.131 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:08.131 }, 00:28:08.131 { 00:28:08.131 "name": "prep_upgrade_on_shutdown", 00:28:08.131 "value": true, 00:28:08.131 "unit": "", 00:28:08.131 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:08.131 } 00:28:08.131 ] 00:28:08.131 } 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85426 ]] 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85426 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 85426 ']' 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 85426 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85426 00:28:08.131 killing process with pid 85426 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85426' 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 85426 00:28:08.131 15:36:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 85426 00:28:09.067 [2024-07-11 15:36:22.553645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:09.067 [2024-07-11 15:36:22.568682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.067 [2024-07-11 15:36:22.568743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:09.068 [2024-07-11 15:36:22.568786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:09.068 [2024-07-11 15:36:22.568797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.068 [2024-07-11 15:36:22.568828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:09.068 [2024-07-11 15:36:22.572097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.068 [2024-07-11 15:36:22.572131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:09.068 [2024-07-11 15:36:22.572162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.246 ms 00:28:09.068 [2024-07-11 15:36:22.572189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.380260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.380327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:19.039 [2024-07-11 15:36:31.380364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8808.076 ms 00:28:19.039 [2024-07-11 15:36:31.380375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.381665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:19.039 [2024-07-11 15:36:31.381694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:19.039 [2024-07-11 15:36:31.381716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.267 ms 00:28:19.039 [2024-07-11 15:36:31.381728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.383070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.383118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:19.039 [2024-07-11 15:36:31.383135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.299 ms 00:28:19.039 [2024-07-11 15:36:31.383146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.395739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.395954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:19.039 [2024-07-11 15:36:31.396128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.514 ms 00:28:19.039 [2024-07-11 15:36:31.396185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.404078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.404289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:19.039 [2024-07-11 15:36:31.404451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.814 ms 00:28:19.039 [2024-07-11 15:36:31.404513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.404681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.404756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:19.039 [2024-07-11 15:36:31.404799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:28:19.039 [2024-07-11 15:36:31.404919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.417203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.417404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:19.039 [2024-07-11 15:36:31.417558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.059 ms 00:28:19.039 [2024-07-11 15:36:31.417626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.429937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.430159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:19.039 [2024-07-11 15:36:31.430301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.239 ms 00:28:19.039 [2024-07-11 15:36:31.430357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.442590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.442793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:19.039 [2024-07-11 15:36:31.442940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.155 ms 00:28:19.039 [2024-07-11 15:36:31.442996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.454985] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.455223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:19.039 [2024-07-11 15:36:31.455360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.865 ms 00:28:19.039 [2024-07-11 15:36:31.455418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.455700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:19.039 [2024-07-11 15:36:31.455769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.039 [2024-07-11 15:36:31.455925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:19.039 [2024-07-11 15:36:31.455998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:19.039 [2024-07-11 15:36:31.456228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.456294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.456456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.456613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.456833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.456894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.457966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.458130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:19.039 [2024-07-11 15:36:31.458153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:19.039 [2024-07-11 15:36:31.458165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b7000be0-aea9-4b0d-973e-ac7c5680efe3 00:28:19.039 [2024-07-11 15:36:31.458177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:19.039 [2024-07-11 15:36:31.458188] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:28:19.039 [2024-07-11 15:36:31.458198] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:19.039 [2024-07-11 15:36:31.458210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:19.039 [2024-07-11 15:36:31.458221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:19.039 [2024-07-11 15:36:31.458232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:19.039 [2024-07-11 15:36:31.458243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:19.039 [2024-07-11 15:36:31.458253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:19.039 [2024-07-11 15:36:31.458264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:19.039 [2024-07-11 15:36:31.458276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.458287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:19.039 [2024-07-11 15:36:31.458299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.578 ms 00:28:19.039 [2024-07-11 15:36:31.458318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.474867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.475075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:19.039 [2024-07-11 15:36:31.475226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.502 ms 00:28:19.039 [2024-07-11 15:36:31.475352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.475927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.039 [2024-07-11 15:36:31.476109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:19.039 [2024-07-11 15:36:31.476291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:28:19.039 [2024-07-11 15:36:31.476347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.526221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.039 [2024-07-11 15:36:31.526461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:19.039 [2024-07-11 15:36:31.526506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.039 [2024-07-11 15:36:31.526518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.526573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.039 [2024-07-11 15:36:31.526589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:19.039 [2024-07-11 15:36:31.526608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.039 [2024-07-11 15:36:31.526618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.526727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.039 [2024-07-11 15:36:31.526747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:19.039 [2024-07-11 15:36:31.526758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.039 [2024-07-11 15:36:31.526768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.526791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:28:19.039 [2024-07-11 15:36:31.526812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:19.039 [2024-07-11 15:36:31.526823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.039 [2024-07-11 15:36:31.526836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.617455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.039 [2024-07-11 15:36:31.617555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:19.039 [2024-07-11 15:36:31.617591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.039 [2024-07-11 15:36:31.617602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.039 [2024-07-11 15:36:31.695716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.039 [2024-07-11 15:36:31.695778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:19.040 [2024-07-11 15:36:31.695837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.695848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.695945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.040 [2024-07-11 15:36:31.695962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:19.040 [2024-07-11 15:36:31.695973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.695983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.696037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.040 [2024-07-11 15:36:31.696093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:19.040 [2024-07-11 15:36:31.696125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.696136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.696280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.040 [2024-07-11 15:36:31.696301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:19.040 [2024-07-11 15:36:31.696314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.696325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.696377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.040 [2024-07-11 15:36:31.696396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:19.040 [2024-07-11 15:36:31.696410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.696420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.696476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.040 [2024-07-11 15:36:31.696529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:19.040 [2024-07-11 15:36:31.696562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.696579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.696650] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:19.040 [2024-07-11 15:36:31.696670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:19.040 [2024-07-11 15:36:31.696681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:19.040 [2024-07-11 15:36:31.696695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.040 [2024-07-11 15:36:31.696862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9128.216 ms, result 0 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86005 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86005 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86005 ']' 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.570 15:36:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.570 [2024-07-11 15:36:34.763339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
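Lines common.sh@130-132 and @81-91 above bracket the actual upgrade bounce: the old target (pid 85426) is killed once FTL has persisted its state, and a fresh spdk_tgt (pid 86005) is launched from the tgt.json snapshot. A hedged reconstruction of the two helpers; backgrounding, pid capture, and the missing-config branch are assumptions, the commands and paths are from the trace:

    tcp_target_shutdown() {
        [[ -n $spdk_tgt_pid ]] || return 0
        killprocess $spdk_tgt_pid    # autotest_common.sh helper: kill, then wait
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        local base_bdev= cache_bdev=
        # Restarting from the saved config rebuilds the identical bdev stack,
        # so FTL finds its superblock and the prepared-shutdown state intact.
        [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] || return 1
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
            --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
        spdk_tgt_pid=$!
        export spdk_tgt_pid
        waitforlisten $spdk_tgt_pid  # poll /var/tmp/spdk.sock until it answers
    }

The @89-91 ordering in the trace (pid captured, exported, then waitforlisten 86005) matches this shape, and the reactor coming up on core 0 a few entries below confirms the '--cpumask=[0]' pin.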
00:28:21.570 [2024-07-11 15:36:34.763763] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86005 ] 00:28:21.570 [2024-07-11 15:36:34.926392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.570 [2024-07-11 15:36:35.106827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.509 [2024-07-11 15:36:35.909413] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:22.509 [2024-07-11 15:36:35.909771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:22.509 [2024-07-11 15:36:36.056892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.057181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:22.509 [2024-07-11 15:36:36.057336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:22.509 [2024-07-11 15:36:36.057494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.057639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.057662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:22.509 [2024-07-11 15:36:36.057675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:22.509 [2024-07-11 15:36:36.057685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.057723] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:22.509 [2024-07-11 15:36:36.058716] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:22.509 [2024-07-11 15:36:36.058750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.058761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:22.509 [2024-07-11 15:36:36.058773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.037 ms 00:28:22.509 [2024-07-11 15:36:36.058782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.060012] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:22.509 [2024-07-11 15:36:36.074774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.074816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:22.509 [2024-07-11 15:36:36.074832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.763 ms 00:28:22.509 [2024-07-11 15:36:36.074843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.074910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.074929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:22.509 [2024-07-11 15:36:36.074940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:22.509 [2024-07-11 15:36:36.074949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.079397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 
15:36:36.079450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:22.509 [2024-07-11 15:36:36.079481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.306 ms 00:28:22.509 [2024-07-11 15:36:36.079491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.079577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.079594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:22.509 [2024-07-11 15:36:36.079605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:28:22.509 [2024-07-11 15:36:36.079618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.079675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.079691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:22.509 [2024-07-11 15:36:36.079702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:22.509 [2024-07-11 15:36:36.079713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.079744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:22.509 [2024-07-11 15:36:36.083693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.083729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:22.509 [2024-07-11 15:36:36.083760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.956 ms 00:28:22.509 [2024-07-11 15:36:36.083770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.083806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.083821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:22.509 [2024-07-11 15:36:36.083832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:22.509 [2024-07-11 15:36:36.083847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.083892] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:22.509 [2024-07-11 15:36:36.083922] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:22.509 [2024-07-11 15:36:36.083975] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:22.509 [2024-07-11 15:36:36.083994] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:28:22.509 [2024-07-11 15:36:36.084166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:22.509 [2024-07-11 15:36:36.084185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:22.509 [2024-07-11 15:36:36.084216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:22.509 [2024-07-11 15:36:36.084231] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:22.509 [2024-07-11 15:36:36.084244] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:22.509 [2024-07-11 15:36:36.084255] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:22.509 [2024-07-11 15:36:36.084265] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:22.509 [2024-07-11 15:36:36.084276] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:22.509 [2024-07-11 15:36:36.084286] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:22.509 [2024-07-11 15:36:36.084302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.084317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:22.509 [2024-07-11 15:36:36.084333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:28:22.509 [2024-07-11 15:36:36.084345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.084512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.509 [2024-07-11 15:36:36.084531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:22.509 [2024-07-11 15:36:36.084542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:28:22.509 [2024-07-11 15:36:36.084558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.509 [2024-07-11 15:36:36.084675] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:22.509 [2024-07-11 15:36:36.084715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:22.509 [2024-07-11 15:36:36.084731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:22.509 [2024-07-11 15:36:36.084742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.509 [2024-07-11 15:36:36.084753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:22.509 [2024-07-11 15:36:36.084763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:22.509 [2024-07-11 15:36:36.084787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:22.509 [2024-07-11 15:36:36.084797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:22.509 [2024-07-11 15:36:36.084807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:22.509 [2024-07-11 15:36:36.084836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.509 [2024-07-11 15:36:36.084855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:22.509 [2024-07-11 15:36:36.084869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:22.509 [2024-07-11 15:36:36.084879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.509 [2024-07-11 15:36:36.084888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:22.509 [2024-07-11 15:36:36.084898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:22.509 [2024-07-11 15:36:36.084907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.509 [2024-07-11 15:36:36.084917] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:22.509 [2024-07-11 15:36:36.084926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:22.509 [2024-07-11 15:36:36.084935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.509 [2024-07-11 15:36:36.084945] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:22.509 [2024-07-11 15:36:36.084957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:22.509 [2024-07-11 15:36:36.084975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:22.509 [2024-07-11 15:36:36.084993] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:22.509 [2024-07-11 15:36:36.085009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:22.509 [2024-07-11 15:36:36.085019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:22.509 [2024-07-11 15:36:36.085028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:22.509 [2024-07-11 15:36:36.085038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:22.509 [2024-07-11 15:36:36.085064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:22.509 [2024-07-11 15:36:36.085076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:22.509 [2024-07-11 15:36:36.085086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:22.509 [2024-07-11 15:36:36.085095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:22.509 [2024-07-11 15:36:36.085105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:22.509 [2024-07-11 15:36:36.085114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:22.509 [2024-07-11 15:36:36.085124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.510 [2024-07-11 15:36:36.085133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:22.510 [2024-07-11 15:36:36.085157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:22.510 [2024-07-11 15:36:36.085166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.510 [2024-07-11 15:36:36.085176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:22.510 [2024-07-11 15:36:36.085192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:22.510 [2024-07-11 15:36:36.085211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.510 [2024-07-11 15:36:36.085229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:22.510 [2024-07-11 15:36:36.085244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:22.510 [2024-07-11 15:36:36.085254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.510 [2024-07-11 15:36:36.085262] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:22.510 [2024-07-11 15:36:36.085279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:22.510 [2024-07-11 15:36:36.085289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:22.510 [2024-07-11 15:36:36.085299] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:22.510 [2024-07-11 15:36:36.085309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:22.510 [2024-07-11 15:36:36.085319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:22.510 [2024-07-11 15:36:36.085328] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:22.510 [2024-07-11 15:36:36.085337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:22.510 [2024-07-11 15:36:36.085370] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:22.510 [2024-07-11 15:36:36.085390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:22.510 [2024-07-11 15:36:36.085406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:22.510 [2024-07-11 15:36:36.085419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:22.510 [2024-07-11 15:36:36.085442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:22.510 [2024-07-11 15:36:36.085474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:22.510 [2024-07-11 15:36:36.085484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:22.510 [2024-07-11 15:36:36.085494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:22.510 [2024-07-11 15:36:36.085505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:22.510 [2024-07-11 15:36:36.085601] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:22.510 [2024-07-11 15:36:36.085612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:22.510 [2024-07-11 15:36:36.085643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:22.510 [2024-07-11 15:36:36.085661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:22.510 [2024-07-11 15:36:36.085672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:22.510 [2024-07-11 15:36:36.085684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:22.510 [2024-07-11 15:36:36.085694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:22.510 [2024-07-11 15:36:36.085705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.076 ms 00:28:22.510 [2024-07-11 15:36:36.085722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:22.510 [2024-07-11 15:36:36.085813] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:22.510 [2024-07-11 15:36:36.085843] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:25.044 [2024-07-11 15:36:38.164484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.044 [2024-07-11 15:36:38.164566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:25.044 [2024-07-11 15:36:38.164604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2078.685 ms 00:28:25.044 [2024-07-11 15:36:38.164616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.044 [2024-07-11 15:36:38.194788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.044 [2024-07-11 15:36:38.194842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:25.044 [2024-07-11 15:36:38.194878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.926 ms 00:28:25.044 [2024-07-11 15:36:38.194894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.044 [2024-07-11 15:36:38.195019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.044 [2024-07-11 15:36:38.195064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:25.044 [2024-07-11 15:36:38.195081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:25.044 [2024-07-11 15:36:38.195108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.230707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.230765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:25.045 [2024-07-11 15:36:38.230801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.538 ms 00:28:25.045 [2024-07-11 15:36:38.230812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.230880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.230911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:25.045 [2024-07-11 15:36:38.230940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:25.045 [2024-07-11 15:36:38.230951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.231401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.231428] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:25.045 [2024-07-11 15:36:38.231459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:28:25.045 [2024-07-11 15:36:38.231478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.231545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.231561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:25.045 [2024-07-11 15:36:38.231605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:25.045 [2024-07-11 15:36:38.231615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.248069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.248114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:25.045 [2024-07-11 15:36:38.248148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.426 ms 00:28:25.045 [2024-07-11 15:36:38.248159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.263089] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:25.045 [2024-07-11 15:36:38.263134] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:25.045 [2024-07-11 15:36:38.263169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.263180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:25.045 [2024-07-11 15:36:38.263192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.858 ms 00:28:25.045 [2024-07-11 15:36:38.263202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.279526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.279569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:25.045 [2024-07-11 15:36:38.279603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.277 ms 00:28:25.045 [2024-07-11 15:36:38.279614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.293538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.293595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:25.045 [2024-07-11 15:36:38.293629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.892 ms 00:28:25.045 [2024-07-11 15:36:38.293639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.307613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.307653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:25.045 [2024-07-11 15:36:38.307685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.929 ms 00:28:25.045 [2024-07-11 15:36:38.307696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.308506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.308543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:25.045 [2024-07-11 
15:36:38.308559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.695 ms 00:28:25.045 [2024-07-11 15:36:38.308570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.390929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.391002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:25.045 [2024-07-11 15:36:38.391054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.322 ms 00:28:25.045 [2024-07-11 15:36:38.391069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.404030] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:25.045 [2024-07-11 15:36:38.404792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.404830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:25.045 [2024-07-11 15:36:38.404847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.629 ms 00:28:25.045 [2024-07-11 15:36:38.404864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.404988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.405007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:25.045 [2024-07-11 15:36:38.405019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:25.045 [2024-07-11 15:36:38.405029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.405131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.405150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:25.045 [2024-07-11 15:36:38.405163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:25.045 [2024-07-11 15:36:38.405173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.405213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.405227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:25.045 [2024-07-11 15:36:38.405238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:25.045 [2024-07-11 15:36:38.405247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.405282] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:25.045 [2024-07-11 15:36:38.405297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.405308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:25.045 [2024-07-11 15:36:38.405319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:25.045 [2024-07-11 15:36:38.405328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.437102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.437146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:25.045 [2024-07-11 15:36:38.437181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.725 ms 00:28:25.045 [2024-07-11 15:36:38.437193] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.437295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.045 [2024-07-11 15:36:38.437313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:25.045 [2024-07-11 15:36:38.437326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:25.045 [2024-07-11 15:36:38.437336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.045 [2024-07-11 15:36:38.438614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2381.142 ms, result 0 00:28:25.045 [2024-07-11 15:36:38.453539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.045 [2024-07-11 15:36:38.469549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:25.045 [2024-07-11 15:36:38.478490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:25.982 15:36:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.982 15:36:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:25.982 15:36:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:25.982 15:36:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:25.982 15:36:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:25.982 [2024-07-11 15:36:39.499774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.982 [2024-07-11 15:36:39.499837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:25.982 [2024-07-11 15:36:39.499859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:25.982 [2024-07-11 15:36:39.499872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.982 [2024-07-11 15:36:39.499908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.982 [2024-07-11 15:36:39.499931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:25.982 [2024-07-11 15:36:39.499944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:25.982 [2024-07-11 15:36:39.499955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.982 [2024-07-11 15:36:39.499984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:25.982 [2024-07-11 15:36:39.499998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:25.982 [2024-07-11 15:36:39.500011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:25.982 [2024-07-11 15:36:39.500047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:25.982 [2024-07-11 15:36:39.500135] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.344 ms, result 0 00:28:25.982 true 00:28:25.982 15:36:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:26.240 { 00:28:26.240 "name": "ftl", 00:28:26.240 "properties": [ 00:28:26.240 { 00:28:26.240 "name": "superblock_version", 00:28:26.240 "value": 5, 00:28:26.240 "read-only": true 00:28:26.240 }, 
00:28:26.240 { 00:28:26.240 "name": "base_device", 00:28:26.240 "bands": [ 00:28:26.240 { 00:28:26.240 "id": 0, 00:28:26.240 "state": "CLOSED", 00:28:26.240 "validity": 1.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 1, 00:28:26.240 "state": "CLOSED", 00:28:26.240 "validity": 1.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 2, 00:28:26.240 "state": "CLOSED", 00:28:26.240 "validity": 0.007843137254901933 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 3, 00:28:26.240 "state": "FREE", 00:28:26.240 "validity": 0.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 4, 00:28:26.240 "state": "FREE", 00:28:26.240 "validity": 0.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 5, 00:28:26.240 "state": "FREE", 00:28:26.240 "validity": 0.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 6, 00:28:26.240 "state": "FREE", 00:28:26.240 "validity": 0.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 7, 00:28:26.240 "state": "FREE", 00:28:26.240 "validity": 0.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 8, 00:28:26.240 "state": "FREE", 00:28:26.240 "validity": 0.0 00:28:26.240 }, 00:28:26.240 { 00:28:26.240 "id": 9, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 10, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 11, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 12, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 13, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 14, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 15, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 16, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 17, 00:28:26.241 "state": "FREE", 00:28:26.241 "validity": 0.0 00:28:26.241 } 00:28:26.241 ], 00:28:26.241 "read-only": true 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "name": "cache_device", 00:28:26.241 "type": "bdev", 00:28:26.241 "chunks": [ 00:28:26.241 { 00:28:26.241 "id": 0, 00:28:26.241 "state": "INACTIVE", 00:28:26.241 "utilization": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 1, 00:28:26.241 "state": "OPEN", 00:28:26.241 "utilization": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 2, 00:28:26.241 "state": "OPEN", 00:28:26.241 "utilization": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 3, 00:28:26.241 "state": "FREE", 00:28:26.241 "utilization": 0.0 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "id": 4, 00:28:26.241 "state": "FREE", 00:28:26.241 "utilization": 0.0 00:28:26.241 } 00:28:26.241 ], 00:28:26.241 "read-only": true 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "name": "verbose_mode", 00:28:26.241 "value": true, 00:28:26.241 "unit": "", 00:28:26.241 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:26.241 }, 00:28:26.241 { 00:28:26.241 "name": "prep_upgrade_on_shutdown", 00:28:26.241 "value": false, 00:28:26.241 "unit": "", 00:28:26.241 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:26.241 } 00:28:26.241 ] 00:28:26.241 } 00:28:26.241 15:36:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:26.241 15:36:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:26.241 15:36:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:26.499 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:26.499 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:26.499 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:26.499 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:26.499 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:26.758 Validate MD5 checksum, iteration 1 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:26.758 15:36:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:27.016 [2024-07-11 15:36:40.394411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
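The two jq filters in the xtrace above reduce the bdev_ftl_get_properties JSON to single counts: cache chunks with non-zero utilization, and bands left in the OPENED state. Here both come back 0, so the test proceeds straight to checksum validation. A minimal standalone sketch of the same query, assuming the RPC output has been saved to a file (props.json is an illustrative name, not one the test uses):

#!/usr/bin/env bash
# Count cache chunks with non-zero utilization in a saved properties dump.
# props.json is assumed to hold the output of:
#   scripts/rpc.py bdev_ftl_get_properties -b ftl
used=$(jq '[.properties[]
            | select(.name == "cache_device")
            | .chunks[]
            | select(.utilization != 0.0)]
           | length' props.json)
echo "used cache chunks: $used"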
00:28:27.016 [2024-07-11 15:36:40.394765] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86074 ] 00:28:27.016 [2024-07-11 15:36:40.555457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.274 [2024-07-11 15:36:40.745058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.898  Copying: 463/1024 [MB] (463 MBps) Copying: 920/1024 [MB] (457 MBps) Copying: 1024/1024 [MB] (average 454 MBps) 00:28:31.898 00:28:31.898 15:36:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:31.898 15:36:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:33.841 Validate MD5 checksum, iteration 2 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d495be9b1096b33c6c4d4786b0d94fed 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d495be9b1096b33c6c4d4786b0d94fed != \d\4\9\5\b\e\9\b\1\0\9\6\b\3\3\c\6\c\4\d\4\7\8\6\b\0\d\9\4\f\e\d ]] 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:33.841 15:36:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:33.841 [2024-07-11 15:36:47.426494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
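Each validation iteration above copies a 1 GiB window (1024 blocks of 1 MiB at queue depth 2) from the NVMe/TCP-attached ftln1 bdev into a scratch file and hashes it; the backslash-riddled right-hand side of the [[ ... != ... ]] test is just bash xtrace escaping every character of the quoted expected sum so it matches literally rather than as a glob pattern. A condensed sketch of the loop's shape, assuming tcp_dd is the spdk_dd wrapper from test/ftl/common.sh and that the reference sums are recorded on the first pass (the scratch path is illustrative):

#!/usr/bin/env bash
declare -a expected    # reference MD5 sums, filled in on the first pass

validate_checksums() {
    local file=/tmp/ftl_file iterations=2 skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read one 1 GiB window from the attached FTL bdev into the file.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$file" | cut -f1 -d' ')
        if [[ -z ${expected[i]:-} ]]; then
            expected[i]=$sum                     # first run: record the sum
        elif [[ $sum != "${expected[i]}" ]]; then
            echo "MD5 mismatch in window $i"
            return 1
        fi
        skip=$((skip + 1024))
    done
}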
00:28:33.841 [2024-07-11 15:36:47.426664] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86147 ] 00:28:34.101 [2024-07-11 15:36:47.599089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.360 [2024-07-11 15:36:47.819233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.392  Copying: 490/1024 [MB] (490 MBps) Copying: 955/1024 [MB] (465 MBps) Copying: 1024/1024 [MB] (average 478 MBps) 00:28:40.393 00:28:40.393 15:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:40.393 15:36:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ff5c5613ecd3f779fbfd40cc1c4613a5 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ff5c5613ecd3f779fbfd40cc1c4613a5 != \f\f\5\c\5\6\1\3\e\c\d\3\f\7\7\9\f\b\f\d\4\0\c\c\1\c\4\6\1\3\a\5 ]] 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 86005 ]] 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 86005 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=86235 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 86235 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 86235 ']' 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
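The restart above is deliberately abrupt: tcp_target_shutdown_dirty SIGKILLs the running target (pid 86005 in the "Killed" message), so FTL never writes a clean shutdown record, and tcp_target_setup then launches a fresh target (pid 86235) from the tgt.json saved earlier, forcing the dirty-state recovery traced below. A sketch of that sequence, with paths taken from the log and waitforlisten assumed to be the readiness helper from autotest_common.sh:

# Kill the target hard so the FTL device is left dirty.
kill -9 "$spdk_tgt_pid"
unset spdk_tgt_pid

# Start a new target from the config captured before the kill; loading the
# ftl bdev out of tgt.json is what triggers dirty-state detection below.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"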
00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:42.295 15:36:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.295 [2024-07-11 15:36:55.739189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:42.295 [2024-07-11 15:36:55.739378] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86235 ] 00:28:42.295 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 86005 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:42.554 [2024-07-11 15:36:55.914631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.554 [2024-07-11 15:36:56.109700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.491 [2024-07-11 15:36:56.936061] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:43.491 [2024-07-11 15:36:56.936154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:43.491 [2024-07-11 15:36:57.085424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.491 [2024-07-11 15:36:57.085479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:43.491 [2024-07-11 15:36:57.085504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:43.491 [2024-07-11 15:36:57.085516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.491 [2024-07-11 15:36:57.085601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.491 [2024-07-11 15:36:57.085634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:43.491 [2024-07-11 15:36:57.085660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:28:43.491 [2024-07-11 15:36:57.085685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.491 [2024-07-11 15:36:57.085715] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:43.491 [2024-07-11 15:36:57.086696] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:43.491 [2024-07-11 15:36:57.086738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.491 [2024-07-11 15:36:57.086753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:43.491 [2024-07-11 15:36:57.086765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.030 ms 00:28:43.491 [2024-07-11 15:36:57.086777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.491 [2024-07-11 15:36:57.087301] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:43.752 [2024-07-11 15:36:57.109204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.109246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:43.752 [2024-07-11 15:36:57.109279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.904 ms 00:28:43.752 [2024-07-11 15:36:57.109297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.122069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:43.752 [2024-07-11 15:36:57.122115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:43.752 [2024-07-11 15:36:57.122132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:43.752 [2024-07-11 15:36:57.122143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.122633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.122668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:43.752 [2024-07-11 15:36:57.122689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.384 ms 00:28:43.752 [2024-07-11 15:36:57.122701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.122771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.122791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:43.752 [2024-07-11 15:36:57.122804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:43.752 [2024-07-11 15:36:57.122815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.122857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.122873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:43.752 [2024-07-11 15:36:57.122885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:43.752 [2024-07-11 15:36:57.122900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.122935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:43.752 [2024-07-11 15:36:57.127166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.127225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:43.752 [2024-07-11 15:36:57.127242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.238 ms 00:28:43.752 [2024-07-11 15:36:57.127253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.127294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.127312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:43.752 [2024-07-11 15:36:57.127324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:43.752 [2024-07-11 15:36:57.127335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.127383] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:43.752 [2024-07-11 15:36:57.127414] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:43.752 [2024-07-11 15:36:57.127459] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:43.752 [2024-07-11 15:36:57.127480] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:28:43.752 [2024-07-11 15:36:57.127598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:43.752 [2024-07-11 15:36:57.127614] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:43.752 [2024-07-11 15:36:57.127643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:43.752 [2024-07-11 15:36:57.127657] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:43.752 [2024-07-11 15:36:57.127670] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:43.752 [2024-07-11 15:36:57.127681] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:43.752 [2024-07-11 15:36:57.127691] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:43.752 [2024-07-11 15:36:57.127705] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:43.752 [2024-07-11 15:36:57.127715] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:43.752 [2024-07-11 15:36:57.127727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.127737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:43.752 [2024-07-11 15:36:57.127752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:28:43.752 [2024-07-11 15:36:57.127762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.752 [2024-07-11 15:36:57.127849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.752 [2024-07-11 15:36:57.127864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:43.753 [2024-07-11 15:36:57.127875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:28:43.753 [2024-07-11 15:36:57.127885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.753 [2024-07-11 15:36:57.128030] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:43.753 [2024-07-11 15:36:57.128070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:43.753 [2024-07-11 15:36:57.128085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:43.753 [2024-07-11 15:36:57.128119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:43.753 [2024-07-11 15:36:57.128141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:43.753 [2024-07-11 15:36:57.128151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:43.753 [2024-07-11 15:36:57.128161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:43.753 [2024-07-11 15:36:57.128183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:43.753 [2024-07-11 15:36:57.128193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:43.753 [2024-07-11 15:36:57.128214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:43.753 [2024-07-11 15:36:57.128224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:43.753 [2024-07-11 15:36:57.128245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:43.753 [2024-07-11 15:36:57.128255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:43.753 [2024-07-11 15:36:57.128276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:43.753 [2024-07-11 15:36:57.128286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:43.753 [2024-07-11 15:36:57.128306] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:43.753 [2024-07-11 15:36:57.128316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:43.753 [2024-07-11 15:36:57.128336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:43.753 [2024-07-11 15:36:57.128346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:43.753 [2024-07-11 15:36:57.128367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:43.753 [2024-07-11 15:36:57.128377] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:43.753 [2024-07-11 15:36:57.128398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:43.753 [2024-07-11 15:36:57.128408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:43.753 [2024-07-11 15:36:57.128429] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:43.753 [2024-07-11 15:36:57.128479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:43.753 [2024-07-11 15:36:57.128511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:43.753 [2024-07-11 15:36:57.128521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128531] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:43.753 [2024-07-11 15:36:57.128542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:43.753 [2024-07-11 15:36:57.128553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:43.753 [2024-07-11 15:36:57.128575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:43.753 [2024-07-11 15:36:57.128586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:43.753 [2024-07-11 15:36:57.128611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:43.753 [2024-07-11 15:36:57.128622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:43.753 [2024-07-11 15:36:57.128640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:43.753 [2024-07-11 15:36:57.128650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:43.753 [2024-07-11 15:36:57.128662] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:43.753 [2024-07-11 15:36:57.128681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:43.753 [2024-07-11 15:36:57.128705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:43.753 [2024-07-11 15:36:57.128739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:43.753 [2024-07-11 15:36:57.128750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:43.753 [2024-07-11 15:36:57.128761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:43.753 [2024-07-11 15:36:57.128772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:43.753 [2024-07-11 15:36:57.128880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:43.753 [2024-07-11 15:36:57.128892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:43.753 [2024-07-11 15:36:57.128915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:43.753 [2024-07-11 15:36:57.128927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:43.753 [2024-07-11 15:36:57.128939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:43.753 [2024-07-11 15:36:57.128952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.753 [2024-07-11 15:36:57.128963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:43.753 [2024-07-11 15:36:57.128974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.019 ms 00:28:43.754 [2024-07-11 15:36:57.128986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.162237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.162290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:43.754 [2024-07-11 15:36:57.162310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.162 ms 00:28:43.754 [2024-07-11 15:36:57.162322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.162392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.162409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:43.754 [2024-07-11 15:36:57.162421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:43.754 [2024-07-11 15:36:57.162438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.203047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.203109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:43.754 [2024-07-11 15:36:57.203128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.520 ms 00:28:43.754 [2024-07-11 15:36:57.203140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.203212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.203235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:43.754 [2024-07-11 15:36:57.203248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:43.754 [2024-07-11 15:36:57.203259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.203410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.203454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:43.754 [2024-07-11 15:36:57.203467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:28:43.754 [2024-07-11 15:36:57.203478] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.203535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.203552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:43.754 [2024-07-11 15:36:57.203569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:43.754 [2024-07-11 15:36:57.203580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.221987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.222062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:43.754 [2024-07-11 15:36:57.222095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.378 ms 00:28:43.754 [2024-07-11 15:36:57.222107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.222247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.222268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:43.754 [2024-07-11 15:36:57.222281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:43.754 [2024-07-11 15:36:57.222291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.260670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.260712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:43.754 [2024-07-11 15:36:57.260729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.354 ms 00:28:43.754 [2024-07-11 15:36:57.260740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.274150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.274192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:43.754 [2024-07-11 15:36:57.274209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.710 ms 00:28:43.754 [2024-07-11 15:36:57.274220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.362769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.362851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:43.754 [2024-07-11 15:36:57.362877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 88.472 ms 00:28:43.754 [2024-07-11 15:36:57.362892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.363235] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:43.754 [2024-07-11 15:36:57.363398] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:43.754 [2024-07-11 15:36:57.363583] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:43.754 [2024-07-11 15:36:57.363762] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:43.754 [2024-07-11 15:36:57.363796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.363812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:43.754 [2024-07-11 
15:36:57.363829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.794 ms 00:28:43.754 [2024-07-11 15:36:57.363842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.754 [2024-07-11 15:36:57.363961] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:43.754 [2024-07-11 15:36:57.364002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.754 [2024-07-11 15:36:57.364058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:43.754 [2024-07-11 15:36:57.364086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:43.754 [2024-07-11 15:36:57.364101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.013 [2024-07-11 15:36:57.387421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.013 [2024-07-11 15:36:57.387472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:44.013 [2024-07-11 15:36:57.387494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.276 ms 00:28:44.013 [2024-07-11 15:36:57.387515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.013 [2024-07-11 15:36:57.401872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.013 [2024-07-11 15:36:57.401922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:44.013 [2024-07-11 15:36:57.401942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:44.013 [2024-07-11 15:36:57.401956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.013 [2024-07-11 15:36:57.402251] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:44.589 [2024-07-11 15:36:57.980806] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:44.590 [2024-07-11 15:36:57.981053] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:45.160 [2024-07-11 15:36:58.538258] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:45.160 [2024-07-11 15:36:58.538393] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:45.160 [2024-07-11 15:36:58.538425] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:45.160 [2024-07-11 15:36:58.538456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.538469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:45.160 [2024-07-11 15:36:58.538499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1136.360 ms 00:28:45.160 [2024-07-11 15:36:58.538524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.538583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.538598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:45.160 [2024-07-11 15:36:58.538626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:45.160 [2024-07-11 15:36:58.538636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:28:45.160 [2024-07-11 15:36:58.553010] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:45.160 [2024-07-11 15:36:58.553185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.553206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:45.160 [2024-07-11 15:36:58.553218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.506 ms 00:28:45.160 [2024-07-11 15:36:58.553228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.554136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.554177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:45.160 [2024-07-11 15:36:58.554193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:28:45.160 [2024-07-11 15:36:58.554204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.556976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.557032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:45.160 [2024-07-11 15:36:58.557079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.736 ms 00:28:45.160 [2024-07-11 15:36:58.557089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.557168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.557200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:45.160 [2024-07-11 15:36:58.557213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:45.160 [2024-07-11 15:36:58.557223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.557386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.557410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:45.160 [2024-07-11 15:36:58.557428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:45.160 [2024-07-11 15:36:58.557440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.557475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.557491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:45.160 [2024-07-11 15:36:58.557515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:45.160 [2024-07-11 15:36:58.557535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.160 [2024-07-11 15:36:58.557583] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:45.160 [2024-07-11 15:36:58.557601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.160 [2024-07-11 15:36:58.557612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:45.160 [2024-07-11 15:36:58.557624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:45.160 [2024-07-11 15:36:58.557658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.161 [2024-07-11 15:36:58.557746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:45.161 
[2024-07-11 15:36:58.557790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:45.161 [2024-07-11 15:36:58.557803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:45.161 [2024-07-11 15:36:58.557813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.161 [2024-07-11 15:36:58.558976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1473.012 ms, result 0 00:28:45.161 [2024-07-11 15:36:58.573653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.161 [2024-07-11 15:36:58.589613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:45.161 [2024-07-11 15:36:58.599033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:45.161 Validate MD5 checksum, iteration 1 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:45.161 15:36:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:45.161 [2024-07-11 15:36:58.743534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
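Every management step in the recovery above is bracketed by trace_step notices carrying a name, a duration and a status, and finish_msg gives the total: 'FTL startup' took 2381.142 ms on the clean start versus 1473.012 ms here, with open-chunk recovery (1136.360 ms) dominating the dirty path. Per-step timings can be pulled out of a captured console log with a short awk pass; a sketch, assuming one notice per line as the target emits them (the wrapping in this capture is a console artifact) and an illustrative log file name:

# List FTL management steps with their durations, slowest first.
awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
     /trace_step.*duration:/ { sub(/.*duration: /, ""); print $1 "\t" name }' \
    build.log | sort -rn | head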
00:28:45.161 [2024-07-11 15:36:58.743702] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86270 ] 00:28:45.419 [2024-07-11 15:36:58.916862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.676 [2024-07-11 15:36:59.151381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.046  Copying: 483/1024 [MB] (483 MBps) Copying: 929/1024 [MB] (446 MBps) Copying: 1024/1024 [MB] (average 466 MBps) 00:28:51.046 00:28:51.046 15:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:51.046 15:37:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:52.949 Validate MD5 checksum, iteration 2 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d495be9b1096b33c6c4d4786b0d94fed 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d495be9b1096b33c6c4d4786b0d94fed != \d\4\9\5\b\e\9\b\1\0\9\6\b\3\3\c\6\c\4\d\4\7\8\6\b\0\d\9\4\f\e\d ]] 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:52.949 15:37:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:52.949 [2024-07-11 15:37:06.277072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:28:52.949 [2024-07-11 15:37:06.277260] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86344 ] 00:28:52.949 [2024-07-11 15:37:06.450243] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.207 [2024-07-11 15:37:06.656940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.215  Copying: 499/1024 [MB] (499 MBps) Copying: 991/1024 [MB] (492 MBps) Copying: 1024/1024 [MB] (average 493 MBps) 00:28:58.215 00:28:58.215 15:37:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:58.215 15:37:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ff5c5613ecd3f779fbfd40cc1c4613a5 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ff5c5613ecd3f779fbfd40cc1c4613a5 != \f\f\5\c\5\6\1\3\e\c\d\3\f\7\7\9\f\b\f\d\4\0\c\c\1\c\4\6\1\3\a\5 ]] 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 86235 ]] 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 86235 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 86235 ']' 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 86235 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86235 00:29:00.122 killing process with pid 86235 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86235' 00:29:00.122 15:37:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 86235 00:29:00.122 15:37:13 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 86235 00:29:01.058 [2024-07-11 15:37:14.537295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:01.058 [2024-07-11 15:37:14.551553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.551612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:01.058 [2024-07-11 15:37:14.551646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:01.058 [2024-07-11 15:37:14.551657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.551686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:01.058 [2024-07-11 15:37:14.554890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.554934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:01.058 [2024-07-11 15:37:14.554963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.183 ms 00:29:01.058 [2024-07-11 15:37:14.554974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.555275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.555296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:01.058 [2024-07-11 15:37:14.555316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:29:01.058 [2024-07-11 15:37:14.555328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.556689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.556743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:01.058 [2024-07-11 15:37:14.556774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.340 ms 00:29:01.058 [2024-07-11 15:37:14.556785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.558148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.558181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:01.058 [2024-07-11 15:37:14.558195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.293 ms 00:29:01.058 [2024-07-11 15:37:14.558213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.569891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.569947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:01.058 [2024-07-11 15:37:14.569978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.628 ms 00:29:01.058 [2024-07-11 15:37:14.569989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.576080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.576133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:01.058 [2024-07-11 15:37:14.576171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.013 ms 00:29:01.058 [2024-07-11 15:37:14.576182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.576261] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.576279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:01.058 [2024-07-11 15:37:14.576290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:01.058 [2024-07-11 15:37:14.576300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.587987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.588064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:29:01.058 [2024-07-11 15:37:14.588095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.665 ms 00:29:01.058 [2024-07-11 15:37:14.588106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.599494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.599543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:29:01.058 [2024-07-11 15:37:14.599573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.347 ms 00:29:01.058 [2024-07-11 15:37:14.599583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.611058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.611126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:01.058 [2024-07-11 15:37:14.611170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.437 ms 00:29:01.058 [2024-07-11 15:37:14.611181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.623608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.058 [2024-07-11 15:37:14.623660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:01.058 [2024-07-11 15:37:14.623690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.357 ms 00:29:01.058 [2024-07-11 15:37:14.623700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.058 [2024-07-11 15:37:14.623740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:01.058 [2024-07-11 15:37:14.623762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:01.058 [2024-07-11 15:37:14.623776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:01.058 [2024-07-11 15:37:14.623791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:01.058 [2024-07-11 15:37:14.623802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.623990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.624002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:01.058 [2024-07-11 15:37:14.624015] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:01.058 [2024-07-11 15:37:14.624044] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b7000be0-aea9-4b0d-973e-ac7c5680efe3 00:29:01.058 [2024-07-11 15:37:14.624061] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:01.058 [2024-07-11 15:37:14.624091] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:01.058 [2024-07-11 15:37:14.624102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:01.058 [2024-07-11 15:37:14.624113] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:01.058 [2024-07-11 15:37:14.624123] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:01.059 [2024-07-11 15:37:14.624134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:01.059 [2024-07-11 15:37:14.624145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:01.059 [2024-07-11 15:37:14.624155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:01.059 [2024-07-11 15:37:14.624165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:01.059 [2024-07-11 15:37:14.624176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.059 [2024-07-11 15:37:14.624187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:01.059 [2024-07-11 15:37:14.624204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.438 ms 00:29:01.059 [2024-07-11 15:37:14.624217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.059 [2024-07-11 15:37:14.640856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:01.059 [2024-07-11 15:37:14.640909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:01.059 [2024-07-11 15:37:14.640941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.614 ms 00:29:01.059 [2024-07-11 15:37:14.640952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.059 [2024-07-11 15:37:14.641451] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:01.059 [2024-07-11 15:37:14.641488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:01.059 [2024-07-11 15:37:14.641504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.456 ms 00:29:01.059 [2024-07-11 15:37:14.641516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.693507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.693578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:01.319 [2024-07-11 15:37:14.693611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.693622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.693678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.693693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:01.319 [2024-07-11 15:37:14.693705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.693716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.693853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.693888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:01.319 [2024-07-11 15:37:14.693901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.693912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.693937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.693952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:01.319 [2024-07-11 15:37:14.693964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.693975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.786028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.786098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:01.319 [2024-07-11 15:37:14.786118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.786130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.862455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.862528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:01.319 [2024-07-11 15:37:14.862561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.862572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.862676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.862693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:01.319 [2024-07-11 15:37:14.862704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.862715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 
15:37:14.862783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.862831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:01.319 [2024-07-11 15:37:14.862844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.862855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.862975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.863002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:01.319 [2024-07-11 15:37:14.863015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.863026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.863094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.863118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:01.319 [2024-07-11 15:37:14.863132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.863143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.863188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.863210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:01.319 [2024-07-11 15:37:14.863222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.863233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.863285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:01.319 [2024-07-11 15:37:14.863302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:01.319 [2024-07-11 15:37:14.863314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:01.319 [2024-07-11 15:37:14.863331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:01.319 [2024-07-11 15:37:14.863474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 311.889 ms, result 0 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:02.700 Remove shared memory files 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid86005 
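Note on the checksum pass that preceded this shutdown: condensed from the upgrade_shutdown.sh xtrace above, it reduces to the loop below. The sums array is an assumed name for the MD5s recorded when the data was written earlier in the test; the script itself compares against inline literals.

iterations=2    # this run validates two 1 GiB windows
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # 1024 x 1 MiB blocks per iteration; --skip advances in input blocks
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    # a mismatch fails the test; both iterations above matched (d495..., ff5c...)
    [[ $sum == "${sums[i]}" ]]
done

Each iteration reads a disjoint 1 GiB window, so the two sums together cover the first 2 GiB of the device across the shutdown/restart cycle.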
00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:02.700 ************************************ 00:29:02.700 END TEST ftl_upgrade_shutdown 00:29:02.700 ************************************ 00:29:02.700 00:29:02.700 real 1m33.497s 00:29:02.700 user 2m13.718s 00:29:02.700 sys 0m22.598s 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:02.700 15:37:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:02.700 15:37:15 ftl -- common/autotest_common.sh@1142 -- # return 0 00:29:02.700 15:37:15 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:02.700 15:37:15 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:02.700 15:37:15 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:29:02.700 15:37:15 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.700 15:37:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:02.700 ************************************ 00:29:02.700 START TEST ftl_restore_fast 00:29:02.700 ************************************ 00:29:02.700 15:37:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:02.700 * Looking for test storage... 00:29:02.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
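Before the xtrace below walks through it call by call: the bdev stack ftl_restore_fast builds condenses to the RPCs sketched here. lvs_uuid and lvol_uuid stand in for the UUIDs the log prints later (a3864cd5-... and 7cb3fe59-... in this run), and clear_lvols first deletes any stale lvstore from a previous run.

rpc=$rootdir/scripts/rpc.py

"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe device
"$rpc" bdev_lvol_create_lvstore nvme0n1 lvs
"$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid"             # thin lvol, 103424 MiB; -u is the lvstore UUID
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache NVMe device
"$rpc" bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB cache partition
"$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" \
    --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown                    # -d is the lvol's UUID

The --fast-shutdown flag is what enables the quick persist-and-restore path; it is the reason the earlier startup above could run "Restore L2P from shared memory" instead of a full recovery.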
00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.oVfTe7CQqi 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:02.700 15:37:16 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=86521 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 86521 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 86521 ']' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:02.700 15:37:16 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:02.700 [2024-07-11 15:37:16.181058] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:02.700 [2024-07-11 15:37:16.181233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86521 ] 00:29:02.960 [2024-07-11 15:37:16.351539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.960 [2024-07-11 15:37:16.506641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:03.529 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:04.098 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:04.098 { 00:29:04.098 "name": "nvme0n1", 00:29:04.098 "aliases": [ 00:29:04.098 "eed1febd-87e3-487f-9346-963dfef54fac" 00:29:04.098 ], 00:29:04.098 "product_name": "NVMe disk", 00:29:04.098 "block_size": 4096, 00:29:04.098 "num_blocks": 1310720, 00:29:04.098 "uuid": "eed1febd-87e3-487f-9346-963dfef54fac", 00:29:04.098 "assigned_rate_limits": { 00:29:04.098 "rw_ios_per_sec": 0, 00:29:04.098 "rw_mbytes_per_sec": 0, 00:29:04.098 "r_mbytes_per_sec": 0, 00:29:04.098 "w_mbytes_per_sec": 0 00:29:04.098 }, 00:29:04.098 "claimed": true, 00:29:04.098 "claim_type": "read_many_write_one", 00:29:04.098 "zoned": false, 00:29:04.098 "supported_io_types": { 00:29:04.098 "read": true, 00:29:04.098 "write": true, 00:29:04.098 "unmap": true, 00:29:04.098 "flush": true, 00:29:04.098 "reset": true, 00:29:04.098 "nvme_admin": true, 00:29:04.098 "nvme_io": true, 00:29:04.098 "nvme_io_md": false, 00:29:04.098 "write_zeroes": true, 00:29:04.098 "zcopy": false, 00:29:04.098 "get_zone_info": false, 00:29:04.098 "zone_management": false, 00:29:04.098 "zone_append": false, 00:29:04.098 "compare": true, 00:29:04.098 "compare_and_write": false, 00:29:04.098 "abort": true, 00:29:04.098 "seek_hole": false, 00:29:04.098 "seek_data": false, 00:29:04.098 "copy": true, 00:29:04.098 "nvme_iov_md": false 00:29:04.098 }, 00:29:04.098 "driver_specific": { 00:29:04.098 "nvme": [ 00:29:04.098 { 00:29:04.098 "pci_address": "0000:00:11.0", 00:29:04.098 "trid": { 00:29:04.098 "trtype": "PCIe", 00:29:04.098 "traddr": "0000:00:11.0" 00:29:04.098 }, 00:29:04.098 "ctrlr_data": { 00:29:04.098 "cntlid": 0, 00:29:04.098 "vendor_id": "0x1b36", 00:29:04.098 "model_number": "QEMU NVMe Ctrl", 00:29:04.098 "serial_number": "12341", 00:29:04.098 "firmware_revision": "8.0.0", 00:29:04.098 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:04.098 "oacs": { 00:29:04.098 "security": 0, 00:29:04.098 "format": 1, 00:29:04.098 "firmware": 0, 00:29:04.098 "ns_manage": 1 00:29:04.098 }, 00:29:04.098 "multi_ctrlr": false, 00:29:04.098 "ana_reporting": false 00:29:04.098 }, 00:29:04.098 "vs": { 00:29:04.098 "nvme_version": "1.4" 00:29:04.098 }, 00:29:04.098 "ns_data": { 00:29:04.098 "id": 1, 00:29:04.099 "can_share": false 00:29:04.099 } 00:29:04.099 } 00:29:04.099 ], 00:29:04.099 "mp_policy": "active_passive" 00:29:04.099 } 00:29:04.099 } 00:29:04.099 ]' 00:29:04.099 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:04.358 15:37:17 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:04.617 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=830a70ac-ca68-48d0-bdbb-5198cffd462c 00:29:04.617 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:04.617 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 830a70ac-ca68-48d0-bdbb-5198cffd462c 00:29:04.877 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:05.136 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=a3864cd5-bbff-48c3-bbbf-f483bfe95047 00:29:05.136 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a3864cd5-bbff-48c3-bbbf-f483bfe95047 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:05.395 15:37:18 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.395 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:05.395 { 00:29:05.395 "name": "7cb3fe59-90bf-4519-8114-7f94975afca6", 00:29:05.395 "aliases": [ 00:29:05.395 "lvs/nvme0n1p0" 00:29:05.395 ], 00:29:05.395 "product_name": "Logical Volume", 00:29:05.395 "block_size": 4096, 00:29:05.395 "num_blocks": 26476544, 00:29:05.395 "uuid": "7cb3fe59-90bf-4519-8114-7f94975afca6", 00:29:05.395 "assigned_rate_limits": { 00:29:05.395 "rw_ios_per_sec": 0, 00:29:05.395 "rw_mbytes_per_sec": 0, 00:29:05.395 "r_mbytes_per_sec": 0, 00:29:05.395 "w_mbytes_per_sec": 0 00:29:05.395 }, 00:29:05.395 "claimed": false, 00:29:05.395 "zoned": false, 00:29:05.395 "supported_io_types": { 00:29:05.395 "read": true, 00:29:05.395 "write": true, 00:29:05.395 "unmap": true, 00:29:05.395 "flush": false, 00:29:05.395 "reset": true, 00:29:05.395 "nvme_admin": false, 00:29:05.395 "nvme_io": false, 00:29:05.395 "nvme_io_md": false, 00:29:05.395 "write_zeroes": true, 00:29:05.395 "zcopy": false, 00:29:05.395 "get_zone_info": false, 00:29:05.395 "zone_management": false, 00:29:05.395 
"zone_append": false, 00:29:05.395 "compare": false, 00:29:05.395 "compare_and_write": false, 00:29:05.395 "abort": false, 00:29:05.395 "seek_hole": true, 00:29:05.395 "seek_data": true, 00:29:05.395 "copy": false, 00:29:05.395 "nvme_iov_md": false 00:29:05.395 }, 00:29:05.395 "driver_specific": { 00:29:05.395 "lvol": { 00:29:05.395 "lvol_store_uuid": "a3864cd5-bbff-48c3-bbbf-f483bfe95047", 00:29:05.395 "base_bdev": "nvme0n1", 00:29:05.395 "thin_provision": true, 00:29:05.395 "num_allocated_clusters": 0, 00:29:05.395 "snapshot": false, 00:29:05.395 "clone": false, 00:29:05.395 "esnap_clone": false 00:29:05.395 } 00:29:05.395 } 00:29:05.395 } 00:29:05.395 ]' 00:29:05.395 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:05.654 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:05.913 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:06.173 { 00:29:06.173 "name": "7cb3fe59-90bf-4519-8114-7f94975afca6", 00:29:06.173 "aliases": [ 00:29:06.173 "lvs/nvme0n1p0" 00:29:06.173 ], 00:29:06.173 "product_name": "Logical Volume", 00:29:06.173 "block_size": 4096, 00:29:06.173 "num_blocks": 26476544, 00:29:06.173 "uuid": "7cb3fe59-90bf-4519-8114-7f94975afca6", 00:29:06.173 "assigned_rate_limits": { 00:29:06.173 "rw_ios_per_sec": 0, 00:29:06.173 "rw_mbytes_per_sec": 0, 00:29:06.173 "r_mbytes_per_sec": 0, 00:29:06.173 "w_mbytes_per_sec": 0 00:29:06.173 }, 00:29:06.173 "claimed": false, 00:29:06.173 "zoned": false, 00:29:06.173 "supported_io_types": { 00:29:06.173 "read": true, 00:29:06.173 "write": true, 00:29:06.173 "unmap": true, 00:29:06.173 "flush": false, 00:29:06.173 "reset": true, 00:29:06.173 "nvme_admin": false, 00:29:06.173 "nvme_io": false, 00:29:06.173 "nvme_io_md": false, 00:29:06.173 "write_zeroes": true, 00:29:06.173 "zcopy": false, 00:29:06.173 "get_zone_info": false, 00:29:06.173 
"zone_management": false, 00:29:06.173 "zone_append": false, 00:29:06.173 "compare": false, 00:29:06.173 "compare_and_write": false, 00:29:06.173 "abort": false, 00:29:06.173 "seek_hole": true, 00:29:06.173 "seek_data": true, 00:29:06.173 "copy": false, 00:29:06.173 "nvme_iov_md": false 00:29:06.173 }, 00:29:06.173 "driver_specific": { 00:29:06.173 "lvol": { 00:29:06.173 "lvol_store_uuid": "a3864cd5-bbff-48c3-bbbf-f483bfe95047", 00:29:06.173 "base_bdev": "nvme0n1", 00:29:06.173 "thin_provision": true, 00:29:06.173 "num_allocated_clusters": 0, 00:29:06.173 "snapshot": false, 00:29:06.173 "clone": false, 00:29:06.173 "esnap_clone": false 00:29:06.173 } 00:29:06.173 } 00:29:06.173 } 00:29:06.173 ]' 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:06.173 15:37:19 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:06.432 15:37:19 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7cb3fe59-90bf-4519-8114-7f94975afca6 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:06.706 { 00:29:06.706 "name": "7cb3fe59-90bf-4519-8114-7f94975afca6", 00:29:06.706 "aliases": [ 00:29:06.706 "lvs/nvme0n1p0" 00:29:06.706 ], 00:29:06.706 "product_name": "Logical Volume", 00:29:06.706 "block_size": 4096, 00:29:06.706 "num_blocks": 26476544, 00:29:06.706 "uuid": "7cb3fe59-90bf-4519-8114-7f94975afca6", 00:29:06.706 "assigned_rate_limits": { 00:29:06.706 "rw_ios_per_sec": 0, 00:29:06.706 "rw_mbytes_per_sec": 0, 00:29:06.706 "r_mbytes_per_sec": 0, 00:29:06.706 "w_mbytes_per_sec": 0 00:29:06.706 }, 00:29:06.706 "claimed": false, 00:29:06.706 "zoned": false, 00:29:06.706 "supported_io_types": { 00:29:06.706 "read": true, 00:29:06.706 "write": true, 00:29:06.706 "unmap": true, 00:29:06.706 "flush": false, 00:29:06.706 "reset": true, 00:29:06.706 "nvme_admin": false, 00:29:06.706 "nvme_io": false, 00:29:06.706 "nvme_io_md": false, 00:29:06.706 "write_zeroes": true, 00:29:06.706 "zcopy": false, 00:29:06.706 "get_zone_info": false, 00:29:06.706 "zone_management": false, 00:29:06.706 "zone_append": false, 00:29:06.706 "compare": false, 00:29:06.706 "compare_and_write": false, 00:29:06.706 "abort": false, 
00:29:06.706 "seek_hole": true, 00:29:06.706 "seek_data": true, 00:29:06.706 "copy": false, 00:29:06.706 "nvme_iov_md": false 00:29:06.706 }, 00:29:06.706 "driver_specific": { 00:29:06.706 "lvol": { 00:29:06.706 "lvol_store_uuid": "a3864cd5-bbff-48c3-bbbf-f483bfe95047", 00:29:06.706 "base_bdev": "nvme0n1", 00:29:06.706 "thin_provision": true, 00:29:06.706 "num_allocated_clusters": 0, 00:29:06.706 "snapshot": false, 00:29:06.706 "clone": false, 00:29:06.706 "esnap_clone": false 00:29:06.706 } 00:29:06.706 } 00:29:06.706 } 00:29:06.706 ]' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7cb3fe59-90bf-4519-8114-7f94975afca6 --l2p_dram_limit 10' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:06.706 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:06.707 15:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7cb3fe59-90bf-4519-8114-7f94975afca6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:06.978 [2024-07-11 15:37:20.484099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.484211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:06.978 [2024-07-11 15:37:20.484231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:06.978 [2024-07-11 15:37:20.484244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.484318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.484337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.978 [2024-07-11 15:37:20.484349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:06.978 [2024-07-11 15:37:20.484360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.484386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:06.978 [2024-07-11 15:37:20.485269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:06.978 [2024-07-11 15:37:20.485293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.485309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.978 [2024-07-11 15:37:20.485320] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:29:06.978 [2024-07-11 15:37:20.485332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.485452] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1 00:29:06.978 [2024-07-11 15:37:20.486541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.486569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:06.978 [2024-07-11 15:37:20.486585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:06.978 [2024-07-11 15:37:20.486596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.491356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.491442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.978 [2024-07-11 15:37:20.491481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:29:06.978 [2024-07-11 15:37:20.491492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.491625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.491645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.978 [2024-07-11 15:37:20.491659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:06.978 [2024-07-11 15:37:20.491669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.491771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.491788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:06.978 [2024-07-11 15:37:20.491801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:06.978 [2024-07-11 15:37:20.491814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.491847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:06.978 [2024-07-11 15:37:20.495965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.496066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.978 [2024-07-11 15:37:20.496083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.130 ms 00:29:06.978 [2024-07-11 15:37:20.496097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.496159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.496176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:06.978 [2024-07-11 15:37:20.496187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:06.978 [2024-07-11 15:37:20.496199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.496250] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:06.978 [2024-07-11 15:37:20.496414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:06.978 [2024-07-11 15:37:20.496431] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:06.978 [2024-07-11 15:37:20.496449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:06.978 [2024-07-11 15:37:20.496464] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:06.978 [2024-07-11 15:37:20.496478] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:06.978 [2024-07-11 15:37:20.496489] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:06.978 [2024-07-11 15:37:20.496500] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:06.978 [2024-07-11 15:37:20.496517] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:06.978 [2024-07-11 15:37:20.496529] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:06.978 [2024-07-11 15:37:20.496540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.496567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:06.978 [2024-07-11 15:37:20.496578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:29:06.978 [2024-07-11 15:37:20.496589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.496669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.978 [2024-07-11 15:37:20.496685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:06.978 [2024-07-11 15:37:20.496696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:06.978 [2024-07-11 15:37:20.496707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.978 [2024-07-11 15:37:20.496804] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:06.978 [2024-07-11 15:37:20.496824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:06.978 [2024-07-11 15:37:20.496846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.978 [2024-07-11 15:37:20.496859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.496870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:06.978 [2024-07-11 15:37:20.496881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.496890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:06.978 [2024-07-11 15:37:20.496901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:06.978 [2024-07-11 15:37:20.496910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:06.978 [2024-07-11 15:37:20.496920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.978 [2024-07-11 15:37:20.496929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:06.978 [2024-07-11 15:37:20.496940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:06.978 [2024-07-11 15:37:20.496949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.978 [2024-07-11 15:37:20.496961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:06.978 [2024-07-11 15:37:20.496971] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:06.978 [2024-07-11 15:37:20.496981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.496990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:06.978 [2024-07-11 15:37:20.497004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:06.978 [2024-07-11 15:37:20.497013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:06.978 [2024-07-11 15:37:20.497033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.978 [2024-07-11 15:37:20.497053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:06.978 [2024-07-11 15:37:20.497076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.978 [2024-07-11 15:37:20.497100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:06.978 [2024-07-11 15:37:20.497108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.978 [2024-07-11 15:37:20.497128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:06.978 [2024-07-11 15:37:20.497138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.978 [2024-07-11 15:37:20.497157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:06.978 [2024-07-11 15:37:20.497167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.978 [2024-07-11 15:37:20.497188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:06.978 [2024-07-11 15:37:20.497199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:06.978 [2024-07-11 15:37:20.497207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.978 [2024-07-11 15:37:20.497218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:06.978 [2024-07-11 15:37:20.497227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:06.978 [2024-07-11 15:37:20.497239] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:06.978 [2024-07-11 15:37:20.497258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:06.978 [2024-07-11 15:37:20.497267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497277] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:06.978 [2024-07-11 15:37:20.497287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:06.978 [2024-07-11 15:37:20.497298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:29:06.978 [2024-07-11 15:37:20.497308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.978 [2024-07-11 15:37:20.497319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:06.978 [2024-07-11 15:37:20.497329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:06.978 [2024-07-11 15:37:20.497342] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:06.978 [2024-07-11 15:37:20.497351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:06.978 [2024-07-11 15:37:20.497362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:06.979 [2024-07-11 15:37:20.497371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:06.979 [2024-07-11 15:37:20.497386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:06.979 [2024-07-11 15:37:20.497398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:06.979 [2024-07-11 15:37:20.497424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:06.979 [2024-07-11 15:37:20.497435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:06.979 [2024-07-11 15:37:20.497445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:06.979 [2024-07-11 15:37:20.497456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:06.979 [2024-07-11 15:37:20.497466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:06.979 [2024-07-11 15:37:20.497477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:06.979 [2024-07-11 15:37:20.497487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:06.979 [2024-07-11 15:37:20.497500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:06.979 [2024-07-11 15:37:20.497510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:29:06.979 [2024-07-11 15:37:20.497566] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:06.979 [2024-07-11 15:37:20.497577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:06.979 [2024-07-11 15:37:20.497600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:06.979 [2024-07-11 15:37:20.497611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:06.979 [2024-07-11 15:37:20.497621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:06.979 [2024-07-11 15:37:20.497633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.979 [2024-07-11 15:37:20.497643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:06.979 [2024-07-11 15:37:20.497656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:29:06.979 [2024-07-11 15:37:20.497666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.979 [2024-07-11 15:37:20.497716] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:06.979 [2024-07-11 15:37:20.497732] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:09.514 [2024-07-11 15:37:22.623994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.624082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:09.514 [2024-07-11 15:37:22.624119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2126.283 ms 00:29:09.514 [2024-07-11 15:37:22.624130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.654837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.654902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:09.514 [2024-07-11 15:37:22.654939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.380 ms 00:29:09.514 [2024-07-11 15:37:22.654951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.655151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.655172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:09.514 [2024-07-11 15:37:22.655187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:09.514 [2024-07-11 15:37:22.655201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.691495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.691553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:09.514 [2024-07-11 15:37:22.691588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.208 ms 00:29:09.514 [2024-07-11 15:37:22.691599] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.691649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.691670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:09.514 [2024-07-11 15:37:22.691683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:09.514 [2024-07-11 15:37:22.691693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.692094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.692112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:09.514 [2024-07-11 15:37:22.692127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:29:09.514 [2024-07-11 15:37:22.692137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.692301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.692318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:09.514 [2024-07-11 15:37:22.692334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:29:09.514 [2024-07-11 15:37:22.692344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.707786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.707863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:09.514 [2024-07-11 15:37:22.707899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.413 ms 00:29:09.514 [2024-07-11 15:37:22.707910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.720351] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:09.514 [2024-07-11 15:37:22.723038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.723108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:09.514 [2024-07-11 15:37:22.723126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.990 ms 00:29:09.514 [2024-07-11 15:37:22.723154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.785943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.786088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:09.514 [2024-07-11 15:37:22.786111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.738 ms 00:29:09.514 [2024-07-11 15:37:22.786125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.786353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.786378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:09.514 [2024-07-11 15:37:22.786391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:29:09.514 [2024-07-11 15:37:22.786421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.816896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.816969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:29:09.514 [2024-07-11 15:37:22.816986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.414 ms 00:29:09.514 [2024-07-11 15:37:22.816999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.845421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.845462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:09.514 [2024-07-11 15:37:22.845494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.364 ms 00:29:09.514 [2024-07-11 15:37:22.845506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.846306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.846367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:09.514 [2024-07-11 15:37:22.846383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:29:09.514 [2024-07-11 15:37:22.846414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.928637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.928715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:09.514 [2024-07-11 15:37:22.928734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.149 ms 00:29:09.514 [2024-07-11 15:37:22.928751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.958072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.958135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:09.514 [2024-07-11 15:37:22.958156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.266 ms 00:29:09.514 [2024-07-11 15:37:22.958169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:22.987256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:22.987349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:09.514 [2024-07-11 15:37:22.987368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.031 ms 00:29:09.514 [2024-07-11 15:37:22.987380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:23.016553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:23.016628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:09.514 [2024-07-11 15:37:23.016647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.080 ms 00:29:09.514 [2024-07-11 15:37:23.016659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:23.016723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 15:37:23.016744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:09.514 [2024-07-11 15:37:23.016756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:09.514 [2024-07-11 15:37:23.016771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:23.016875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.514 [2024-07-11 
15:37:23.016927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:09.514 [2024-07-11 15:37:23.016942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:09.514 [2024-07-11 15:37:23.016955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.514 [2024-07-11 15:37:23.018300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2533.606 ms, result 0 00:29:09.514 { 00:29:09.514 "name": "ftl0", 00:29:09.514 "uuid": "c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1" 00:29:09.514 } 00:29:09.514 15:37:23 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:09.514 15:37:23 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:09.772 15:37:23 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:09.772 15:37:23 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:10.030 [2024-07-11 15:37:23.577564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.577640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:10.030 [2024-07-11 15:37:23.577679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:10.030 [2024-07-11 15:37:23.577690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.577726] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:10.030 [2024-07-11 15:37:23.580797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.580846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:10.030 [2024-07-11 15:37:23.580877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:29:10.030 [2024-07-11 15:37:23.580888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.581191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.581225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:10.030 [2024-07-11 15:37:23.581251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:29:10.030 [2024-07-11 15:37:23.581264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.584200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.584231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:10.030 [2024-07-11 15:37:23.584261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:29:10.030 [2024-07-11 15:37:23.584273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.589888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.589936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:10.030 [2024-07-11 15:37:23.589983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.593 ms 00:29:10.030 [2024-07-11 15:37:23.590001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.616968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.617064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:10.030 [2024-07-11 15:37:23.617082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.842 ms 00:29:10.030 [2024-07-11 15:37:23.617094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.633149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.633208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:10.030 [2024-07-11 15:37:23.633224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.997 ms 00:29:10.030 [2024-07-11 15:37:23.633237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.030 [2024-07-11 15:37:23.633393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.030 [2024-07-11 15:37:23.633417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:10.030 [2024-07-11 15:37:23.633429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:10.030 [2024-07-11 15:37:23.633441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.289 [2024-07-11 15:37:23.661685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.289 [2024-07-11 15:37:23.661740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:10.289 [2024-07-11 15:37:23.661771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.191 ms 00:29:10.289 [2024-07-11 15:37:23.661783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.289 [2024-07-11 15:37:23.691782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.289 [2024-07-11 15:37:23.691863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:10.289 [2024-07-11 15:37:23.691880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.955 ms 00:29:10.289 [2024-07-11 15:37:23.691893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.289 [2024-07-11 15:37:23.720594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.289 [2024-07-11 15:37:23.720684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:10.289 [2024-07-11 15:37:23.720717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.654 ms 00:29:10.289 [2024-07-11 15:37:23.720740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.289 [2024-07-11 15:37:23.746792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.289 [2024-07-11 15:37:23.746864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:10.289 [2024-07-11 15:37:23.746879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.930 ms 00:29:10.289 [2024-07-11 15:37:23.746891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.289 [2024-07-11 15:37:23.746934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:10.289 [2024-07-11 15:37:23.746960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.746975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.746987] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.746998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747329] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:10.289 [2024-07-11 15:37:23.747377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 
[2024-07-11 15:37:23.747660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.747975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:29:10.290 [2024-07-11 15:37:23.747985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:10.290 [2024-07-11 15:37:23.748294] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:10.290 [2024-07-11 15:37:23.748308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1 
00:29:10.290 [2024-07-11 15:37:23.748321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:10.290 [2024-07-11 15:37:23.748331] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:10.290 [2024-07-11 15:37:23.748344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:10.290 [2024-07-11 15:37:23.748355] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:10.290 [2024-07-11 15:37:23.748366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:10.290 [2024-07-11 15:37:23.748377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:10.290 [2024-07-11 15:37:23.748388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:10.290 [2024-07-11 15:37:23.748398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:10.290 [2024-07-11 15:37:23.748408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:10.290 [2024-07-11 15:37:23.748419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.290 [2024-07-11 15:37:23.748431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:10.290 [2024-07-11 15:37:23.748442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:29:10.290 [2024-07-11 15:37:23.748455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.290 [2024-07-11 15:37:23.763670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.290 [2024-07-11 15:37:23.763715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:10.290 [2024-07-11 15:37:23.763733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.156 ms 00:29:10.290 [2024-07-11 15:37:23.763747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.290 [2024-07-11 15:37:23.764204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.290 [2024-07-11 15:37:23.764228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:10.290 [2024-07-11 15:37:23.764243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:29:10.290 [2024-07-11 15:37:23.764259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.290 [2024-07-11 15:37:23.809614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.290 [2024-07-11 15:37:23.809698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:10.290 [2024-07-11 15:37:23.809716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.290 [2024-07-11 15:37:23.809729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.290 [2024-07-11 15:37:23.809811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.290 [2024-07-11 15:37:23.809827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:10.290 [2024-07-11 15:37:23.809838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.290 [2024-07-11 15:37:23.809853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.290 [2024-07-11 15:37:23.810020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.290 [2024-07-11 15:37:23.810092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:10.290 [2024-07-11 15:37:23.810107] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.290 [2024-07-11 15:37:23.810120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.290 [2024-07-11 15:37:23.810148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.291 [2024-07-11 15:37:23.810168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:10.291 [2024-07-11 15:37:23.810180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.291 [2024-07-11 15:37:23.810193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.291 [2024-07-11 15:37:23.898975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.291 [2024-07-11 15:37:23.899069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:10.291 [2024-07-11 15:37:23.899087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.291 [2024-07-11 15:37:23.899115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.974283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.974352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:10.550 [2024-07-11 15:37:23.974372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.974389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.974524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.974546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:10.550 [2024-07-11 15:37:23.974559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.974572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.974665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.974719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:10.550 [2024-07-11 15:37:23.974733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.974747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.974873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.974894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:10.550 [2024-07-11 15:37:23.974907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.974920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.974972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.974994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:10.550 [2024-07-11 15:37:23.975006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.975019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.975086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.975104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:29:10.550 [2024-07-11 15:37:23.975131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.975200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.975260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:10.550 [2024-07-11 15:37:23.975284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:10.550 [2024-07-11 15:37:23.975298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:10.550 [2024-07-11 15:37:23.975326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.550 [2024-07-11 15:37:23.975497] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.880 ms, result 0 00:29:10.550 true 00:29:10.550 15:37:23 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 86521 00:29:10.550 15:37:23 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 86521 ']' 00:29:10.550 15:37:23 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 86521 00:29:10.550 15:37:23 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86521 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:10.550 killing process with pid 86521 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86521' 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 86521 00:29:10.550 15:37:24 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 86521 00:29:15.823 15:37:28 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:20.013 262144+0 records in 00:29:20.013 262144+0 records out 00:29:20.013 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23473 s, 254 MB/s 00:29:20.013 15:37:32 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:21.387 15:37:34 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:21.387 [2024-07-11 15:37:34.899872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
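In short, the restore flow traced above reduces to a handful of shell steps. The sketch below is reconstructed only from the ftl/restore.sh trace in this log; it is a minimal illustration, not the test script itself. It assumes the SPDK app hosting bdev ftl0 from the trace is still running, and the SPDK_REPO variable is introduced here purely for readability (the log uses the full /home/vagrant/spdk_repo/spdk paths).

# Reconstructed data path of the ftl_restore_fast test (sketch, see note above).
SPDK_REPO=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK_REPO/test/ftl/testfile
FTL_JSON=$SPDK_REPO/test/ftl/config/ftl.json

# Wrap the live bdev subsystem config in a {"subsystems": [...]} envelope so
# spdk_dd can later re-create ftl0 on its own (restore.sh lines 61-63 above).
{
  echo '{"subsystems": ['
  "$SPDK_REPO"/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > "$FTL_JSON"

# 1 GiB of random data in 4 KiB blocks: 262144 * 4096 B = 1073741824 B, which
# at the 4.23473 s reported above works out to ~254 MB/s (10^6 B per MB, as dd counts).
dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K
md5sum "$TESTFILE"

# Replay the file onto the FTL bdev; this is the spdk_dd invocation whose
# startup log follows, and the md5sum gives the reference for verifying the restore.
"$SPDK_REPO"/build/bin/spdk_dd --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"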
00:29:21.387 [2024-07-11 15:37:34.900044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86745 ] 00:29:21.646 [2024-07-11 15:37:35.052075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.646 [2024-07-11 15:37:35.199605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.904 [2024-07-11 15:37:35.454426] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:21.904 [2024-07-11 15:37:35.454523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:22.164 [2024-07-11 15:37:35.612964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.613059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:22.164 [2024-07-11 15:37:35.613080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:22.164 [2024-07-11 15:37:35.613091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.613157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.613176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:22.164 [2024-07-11 15:37:35.613187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:22.164 [2024-07-11 15:37:35.613200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.613228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:22.164 [2024-07-11 15:37:35.614110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:22.164 [2024-07-11 15:37:35.614141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.614158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:22.164 [2024-07-11 15:37:35.614171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:29:22.164 [2024-07-11 15:37:35.614181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.615348] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:22.164 [2024-07-11 15:37:35.629407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.629460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:22.164 [2024-07-11 15:37:35.629491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.060 ms 00:29:22.164 [2024-07-11 15:37:35.629501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.629564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.629582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:22.164 [2024-07-11 15:37:35.629596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:29:22.164 [2024-07-11 15:37:35.629606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.633630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:22.164 [2024-07-11 15:37:35.633682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:22.164 [2024-07-11 15:37:35.633711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.944 ms 00:29:22.164 [2024-07-11 15:37:35.633722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.633805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.633824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:22.164 [2024-07-11 15:37:35.633836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:22.164 [2024-07-11 15:37:35.633845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.633907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.633925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:22.164 [2024-07-11 15:37:35.633937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:22.164 [2024-07-11 15:37:35.633946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.633976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:22.164 [2024-07-11 15:37:35.637693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.637740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:22.164 [2024-07-11 15:37:35.637770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:29:22.164 [2024-07-11 15:37:35.637780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.637821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.164 [2024-07-11 15:37:35.637835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:22.164 [2024-07-11 15:37:35.637846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:22.164 [2024-07-11 15:37:35.637855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.164 [2024-07-11 15:37:35.637894] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:22.164 [2024-07-11 15:37:35.637922] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:22.164 [2024-07-11 15:37:35.637960] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:22.164 [2024-07-11 15:37:35.637980] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:22.164 [2024-07-11 15:37:35.638145] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:22.165 [2024-07-11 15:37:35.638175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:22.165 [2024-07-11 15:37:35.638190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:22.165 [2024-07-11 15:37:35.638205] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638217] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638229] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:22.165 [2024-07-11 15:37:35.638239] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:22.165 [2024-07-11 15:37:35.638249] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:22.165 [2024-07-11 15:37:35.638259] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:22.165 [2024-07-11 15:37:35.638270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.165 [2024-07-11 15:37:35.638286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:22.165 [2024-07-11 15:37:35.638298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:29:22.165 [2024-07-11 15:37:35.638308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.165 [2024-07-11 15:37:35.638405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.165 [2024-07-11 15:37:35.638433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:22.165 [2024-07-11 15:37:35.638444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:22.165 [2024-07-11 15:37:35.638454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.165 [2024-07-11 15:37:35.638551] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:22.165 [2024-07-11 15:37:35.638567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:22.165 [2024-07-11 15:37:35.638584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:22.165 [2024-07-11 15:37:35.638613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:22.165 [2024-07-11 15:37:35.638641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:22.165 [2024-07-11 15:37:35.638659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:22.165 [2024-07-11 15:37:35.638668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:22.165 [2024-07-11 15:37:35.638677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:22.165 [2024-07-11 15:37:35.638687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:22.165 [2024-07-11 15:37:35.638697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:22.165 [2024-07-11 15:37:35.638706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:22.165 [2024-07-11 15:37:35.638725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638734] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:22.165 [2024-07-11 15:37:35.638764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:22.165 [2024-07-11 15:37:35.638792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638802] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:22.165 [2024-07-11 15:37:35.638819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:22.165 [2024-07-11 15:37:35.638846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:22.165 [2024-07-11 15:37:35.638864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:22.165 [2024-07-11 15:37:35.638873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:22.165 [2024-07-11 15:37:35.638891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:22.165 [2024-07-11 15:37:35.638900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:22.165 [2024-07-11 15:37:35.638909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:22.165 [2024-07-11 15:37:35.638918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:22.165 [2024-07-11 15:37:35.638928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:22.165 [2024-07-11 15:37:35.638936] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:22.165 [2024-07-11 15:37:35.638955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:22.165 [2024-07-11 15:37:35.638963] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.638972] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:22.165 [2024-07-11 15:37:35.638982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:22.165 [2024-07-11 15:37:35.638994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:22.165 [2024-07-11 15:37:35.639004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:22.165 [2024-07-11 15:37:35.639014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:22.165 [2024-07-11 15:37:35.639023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:22.165 [2024-07-11 15:37:35.639032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:22.165 
[2024-07-11 15:37:35.639042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:22.165 [2024-07-11 15:37:35.639066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:22.165 [2024-07-11 15:37:35.639077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:22.165 [2024-07-11 15:37:35.639088] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:22.165 [2024-07-11 15:37:35.639100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:22.165 [2024-07-11 15:37:35.639123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:22.165 [2024-07-11 15:37:35.639133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:22.165 [2024-07-11 15:37:35.639143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:22.165 [2024-07-11 15:37:35.639154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:22.165 [2024-07-11 15:37:35.639164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:22.165 [2024-07-11 15:37:35.639178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:22.165 [2024-07-11 15:37:35.639188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:22.165 [2024-07-11 15:37:35.639198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:22.165 [2024-07-11 15:37:35.639208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:22.165 [2024-07-11 15:37:35.639274] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:22.165 [2024-07-11 15:37:35.639286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:22.165 [2024-07-11 15:37:35.639307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:22.165 [2024-07-11 15:37:35.639318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:22.165 [2024-07-11 15:37:35.639329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:22.165 [2024-07-11 15:37:35.639340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.165 [2024-07-11 15:37:35.639356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:22.165 [2024-07-11 15:37:35.639370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:29:22.165 [2024-07-11 15:37:35.639381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.165 [2024-07-11 15:37:35.677779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.165 [2024-07-11 15:37:35.677843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:22.165 [2024-07-11 15:37:35.677877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.341 ms 00:29:22.165 [2024-07-11 15:37:35.677888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.165 [2024-07-11 15:37:35.678019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.165 [2024-07-11 15:37:35.678070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:22.165 [2024-07-11 15:37:35.678087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:22.165 [2024-07-11 15:37:35.678098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.165 [2024-07-11 15:37:35.711282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.165 [2024-07-11 15:37:35.711342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:22.165 [2024-07-11 15:37:35.711389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.076 ms 00:29:22.166 [2024-07-11 15:37:35.711400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.166 [2024-07-11 15:37:35.711471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.166 [2024-07-11 15:37:35.711487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:22.166 [2024-07-11 15:37:35.711499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:22.166 [2024-07-11 15:37:35.711510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.166 [2024-07-11 15:37:35.711928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.166 [2024-07-11 15:37:35.711947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:22.166 [2024-07-11 15:37:35.711960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:29:22.166 [2024-07-11 15:37:35.711972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.166 [2024-07-11 15:37:35.712197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.166 [2024-07-11 15:37:35.712216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:22.166 [2024-07-11 15:37:35.712228] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:29:22.166 [2024-07-11 15:37:35.712238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.166 [2024-07-11 15:37:35.726358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.166 [2024-07-11 15:37:35.726412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:22.166 [2024-07-11 15:37:35.726458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.094 ms 00:29:22.166 [2024-07-11 15:37:35.726469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.166 [2024-07-11 15:37:35.740398] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:22.166 [2024-07-11 15:37:35.740458] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:22.166 [2024-07-11 15:37:35.740494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.166 [2024-07-11 15:37:35.740505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:22.166 [2024-07-11 15:37:35.740516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.886 ms 00:29:22.166 [2024-07-11 15:37:35.740525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.166 [2024-07-11 15:37:35.765547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.166 [2024-07-11 15:37:35.765598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:22.166 [2024-07-11 15:37:35.765629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.981 ms 00:29:22.166 [2024-07-11 15:37:35.765639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.779646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.779696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:22.425 [2024-07-11 15:37:35.779726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.954 ms 00:29:22.425 [2024-07-11 15:37:35.779735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.793335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.793386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:22.425 [2024-07-11 15:37:35.793416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.561 ms 00:29:22.425 [2024-07-11 15:37:35.793425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.794222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.794261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:22.425 [2024-07-11 15:37:35.794277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:29:22.425 [2024-07-11 15:37:35.794289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.860351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.860427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:22.425 [2024-07-11 15:37:35.860478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.022 ms 00:29:22.425 [2024-07-11 15:37:35.860488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.871763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:22.425 [2024-07-11 15:37:35.874274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.874313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:22.425 [2024-07-11 15:37:35.874346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.728 ms 00:29:22.425 [2024-07-11 15:37:35.874372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.874499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.874516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:22.425 [2024-07-11 15:37:35.874528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:22.425 [2024-07-11 15:37:35.874539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.874636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.874653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:22.425 [2024-07-11 15:37:35.874670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:22.425 [2024-07-11 15:37:35.874680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.874709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.874721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:22.425 [2024-07-11 15:37:35.874732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:22.425 [2024-07-11 15:37:35.874742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.874777] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:22.425 [2024-07-11 15:37:35.874792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.874802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:22.425 [2024-07-11 15:37:35.874812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:22.425 [2024-07-11 15:37:35.874825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.900935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.900987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:22.425 [2024-07-11 15:37:35.901018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.088 ms 00:29:22.425 [2024-07-11 15:37:35.901028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:22.425 [2024-07-11 15:37:35.901107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:22.425 [2024-07-11 15:37:35.901124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:22.425 [2024-07-11 15:37:35.901142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:22.425 [2024-07-11 15:37:35.901152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
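
The startup steps above are logged through a fixed per-step pattern: an "Action" marker, then the step's name, its wall-clock duration, and a status code, with each management pipeline ending in a "Management process finished" summary that carries the total duration (the 'FTL startup' summary appears just below). The following is a minimal standalone C sketch of that tracing pattern, illustrative only; it is not SPDK's actual mngt/ftl_mngt.c implementation. The step names are copied from the log, everything else is assumed:

/*
 * Minimal sketch of a per-step tracer: run each named step through a
 * callback, time it, and print the Action/name/duration/status quadruple
 * plus a final pipeline summary, in the shape seen in the log above.
 */
#include <stdio.h>
#include <stddef.h>
#include <time.h>

typedef int (*step_fn)(void);

static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

static int run_step(const char *dev, const char *name, step_fn fn, double *total_ms)
{
        struct timespec t0, t1;

        printf("[FTL][%s] Action\n", dev);
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = elapsed_ms(&t0, &t1);
        printf("[FTL][%s] name: %s\n", dev, name);
        printf("[FTL][%s] duration: %.3f ms\n", dev, ms);
        printf("[FTL][%s] status: %d\n", dev, status);
        *total_ms += ms;
        return status;
}

static int noop(void) { return 0; }  /* stand-in for a real step body */

int main(void)
{
        const char *steps[] = { "Initialize L2P", "Start core poller",
                                "Set FTL dirty state", "Finalize initialization" };
        double total = 0.0;

        for (size_t i = 0; i < sizeof(steps) / sizeof(steps[0]); i++)
                if (run_step("ftl0", steps[i], noop, &total) != 0)
                        return 1;

        printf("[FTL][ftl0] Management process finished, name 'FTL startup', "
               "duration = %.3f ms, result 0\n", total);
        return 0;
}

Driving the steps from a table like this is consistent with the "Rollback" entries later in this log, which replay the same step names in reverse order when the pipeline unwinds.
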
00:29:22.425 [2024-07-11 15:37:35.902564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 289.020 ms, result 0 00:30:06.002  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 15:38:19.481640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.002 [2024-07-11 15:38:19.481711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:06.002 [2024-07-11 15:38:19.481747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:06.002 [2024-07-11 15:38:19.481758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.002 [2024-07-11 15:38:19.481784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:06.002 [2024-07-11 15:38:19.485109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.002 [2024-07-11 15:38:19.485160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:06.002 [2024-07-11 15:38:19.485176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:30:06.002 [2024-07-11 15:38:19.485187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.002 [2024-07-11 15:38:19.487218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.002 [2024-07-11 15:38:19.487319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:06.002 [2024-07-11 15:38:19.487341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:30:06.002 [2024-07-11 15:38:19.487352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.002 [2024-07-11 15:38:19.487396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.002 [2024-07-11 15:38:19.487410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:06.002 [2024-07-11 15:38:19.487422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:06.002 [2024-07-11 15:38:19.487431] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.002 [2024-07-11 15:38:19.487479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.002 [2024-07-11 15:38:19.487493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:06.002 [2024-07-11 15:38:19.487504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:06.002 [2024-07-11 15:38:19.487517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.002 [2024-07-11 15:38:19.487550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:06.002 [2024-07-11 15:38:19.487581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487812] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.487993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488112] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:06.002 [2024-07-11 15:38:19.488186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 
15:38:19.488380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:30:06.003 [2024-07-11 15:38:19.488646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:06.003 [2024-07-11 15:38:19.488707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:06.003 [2024-07-11 15:38:19.488718] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1 00:30:06.003 [2024-07-11 15:38:19.488729] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:06.003 [2024-07-11 15:38:19.488739] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:06.003 [2024-07-11 15:38:19.488748] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:06.003 [2024-07-11 15:38:19.488759] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:06.003 [2024-07-11 15:38:19.488768] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:06.003 [2024-07-11 15:38:19.488779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:06.003 [2024-07-11 15:38:19.488794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:06.003 [2024-07-11 15:38:19.488804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:06.003 [2024-07-11 15:38:19.488813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:06.003 [2024-07-11 15:38:19.488823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.003 [2024-07-11 15:38:19.488834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:06.003 [2024-07-11 15:38:19.488846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:30:06.003 [2024-07-11 15:38:19.488856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.003 [2024-07-11 15:38:19.502955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.003 [2024-07-11 15:38:19.503005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:06.003 [2024-07-11 15:38:19.503051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.079 ms 00:30:06.003 [2024-07-11 15:38:19.503080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.003 [2024-07-11 15:38:19.503522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:06.003 [2024-07-11 15:38:19.503552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:06.003 [2024-07-11 15:38:19.503566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:30:06.003 [2024-07-11 15:38:19.503576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.003 [2024-07-11 15:38:19.534976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.003 [2024-07-11 15:38:19.535057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:30:06.003 [2024-07-11 15:38:19.535073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.003 [2024-07-11 15:38:19.535088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.003 [2024-07-11 15:38:19.535145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.003 [2024-07-11 15:38:19.535159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:06.003 [2024-07-11 15:38:19.535170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.003 [2024-07-11 15:38:19.535179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.003 [2024-07-11 15:38:19.535238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.003 [2024-07-11 15:38:19.535272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:06.003 [2024-07-11 15:38:19.535300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.003 [2024-07-11 15:38:19.535326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.003 [2024-07-11 15:38:19.535352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.003 [2024-07-11 15:38:19.535365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:06.003 [2024-07-11 15:38:19.535376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.003 [2024-07-11 15:38:19.535403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.617590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.617667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:06.262 [2024-07-11 15:38:19.617700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.617710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.692475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.692547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:06.262 [2024-07-11 15:38:19.692580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.692590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.692687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.692704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:06.262 [2024-07-11 15:38:19.692716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.692727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.692785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.692821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:06.262 [2024-07-11 15:38:19.692833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.692844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.692954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.692984] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:06.262 [2024-07-11 15:38:19.692998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.693009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.693072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.693093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:06.262 [2024-07-11 15:38:19.693127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.693138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.693180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.693194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:06.262 [2024-07-11 15:38:19.693206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.693216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.693279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:06.262 [2024-07-11 15:38:19.693311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:06.262 [2024-07-11 15:38:19.693323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:06.262 [2024-07-11 15:38:19.693334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:06.262 [2024-07-11 15:38:19.693478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.819 ms, result 0 00:30:07.208 00:30:07.208 00:30:07.208 15:38:20 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:07.467 [2024-07-11 15:38:20.872381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
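
A note on the numbers in that spdk_dd invocation: --count=262144 input blocks at the FTL's 4 KiB block size (an assumption here) works out to 262144 x 4096 B = 1024 MiB, matching the 1024 [MB] total of the copy pass above. That pass's 23 MBps average also agrees with the wall clock: roughly 1024 MB over the ~44 s between the 'FTL startup' finish at 15:37:35.90 and the first shutdown trace at 15:38:19.48. Below is a plain-POSIX C sketch of the dd-style block-copy loop such an invocation performs, illustrative only; the real spdk_dd drives SPDK bdevs through its own I/O paths rather than file descriptors:

/*
 * dd-style block copy: read BLOCK_COUNT blocks of BLOCK_SIZE bytes from
 * one path and write them to another, printing progress and a final
 * average-throughput line in the shape of the "Copying:" output above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096L     /* assumed FTL block size */
#define BLOCK_COUNT 262144L  /* mirrors --count above  */

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <in> <out>\n", argv[0]);
                return 1;
        }

        int in = open(argv[1], O_RDONLY);
        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        char *buf = malloc(BLOCK_SIZE);
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        for (long blk = 0; blk < BLOCK_COUNT; blk++) {
                if (read(in, buf, BLOCK_SIZE) != BLOCK_SIZE ||
                    write(out, buf, BLOCK_SIZE) != BLOCK_SIZE) {
                        perror("copy");
                        return 1;
                }
                if ((blk + 1) % 16384 == 0)  /* progress every 64 MiB */
                        printf("Copying: %ld/%ld [MB]\n",
                               (blk + 1) * BLOCK_SIZE / (1024 * 1024),
                               BLOCK_COUNT * BLOCK_SIZE / (1024 * 1024));
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double mb = (double)(BLOCK_COUNT * BLOCK_SIZE) / (1024 * 1024);
        printf("Copying: %.0f/%.0f [MB] (average %.0f MBps)\n", mb, mb, mb / secs);

        free(buf);
        close(in);
        close(out);
        return 0;
}
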
00:30:07.467 [2024-07-11 15:38:20.872563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87183 ] 00:30:07.467 [2024-07-11 15:38:21.039344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.727 [2024-07-11 15:38:21.209619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.985 [2024-07-11 15:38:21.542842] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:07.985 [2024-07-11 15:38:21.542957] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.246 [2024-07-11 15:38:21.706055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.706111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.246 [2024-07-11 15:38:21.706132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.246 [2024-07-11 15:38:21.706145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.706224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.706245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.246 [2024-07-11 15:38:21.706259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:08.246 [2024-07-11 15:38:21.706274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.706306] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.246 [2024-07-11 15:38:21.707226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.246 [2024-07-11 15:38:21.707269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.707289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.246 [2024-07-11 15:38:21.707302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:30:08.246 [2024-07-11 15:38:21.707315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.707783] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:08.246 [2024-07-11 15:38:21.707826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.707840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:08.246 [2024-07-11 15:38:21.707854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:08.246 [2024-07-11 15:38:21.707873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.707930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.707947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:08.246 [2024-07-11 15:38:21.707959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:08.246 [2024-07-11 15:38:21.707970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.708414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:08.246 [2024-07-11 15:38:21.708445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.246 [2024-07-11 15:38:21.708460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:30:08.246 [2024-07-11 15:38:21.708477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.708589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.708608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.246 [2024-07-11 15:38:21.708621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:30:08.246 [2024-07-11 15:38:21.708632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.708668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.708685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.246 [2024-07-11 15:38:21.708697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:08.246 [2024-07-11 15:38:21.708708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.708743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.246 [2024-07-11 15:38:21.713638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.713694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.246 [2024-07-11 15:38:21.713716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.901 ms 00:30:08.246 [2024-07-11 15:38:21.713728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.713772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.713789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.246 [2024-07-11 15:38:21.713802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:08.246 [2024-07-11 15:38:21.713813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.713878] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:08.246 [2024-07-11 15:38:21.713909] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:08.246 [2024-07-11 15:38:21.713952] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:08.246 [2024-07-11 15:38:21.713977] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:08.246 [2024-07-11 15:38:21.714108] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.246 [2024-07-11 15:38:21.714129] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.246 [2024-07-11 15:38:21.714143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:08.246 [2024-07-11 15:38:21.714159] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714173] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714185] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.246 [2024-07-11 15:38:21.714196] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.246 [2024-07-11 15:38:21.714208] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.246 [2024-07-11 15:38:21.714225] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.246 [2024-07-11 15:38:21.714237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.714248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.246 [2024-07-11 15:38:21.714261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:30:08.246 [2024-07-11 15:38:21.714273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.714369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.246 [2024-07-11 15:38:21.714386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.246 [2024-07-11 15:38:21.714398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:08.246 [2024-07-11 15:38:21.714410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.246 [2024-07-11 15:38:21.714518] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.246 [2024-07-11 15:38:21.714536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.246 [2024-07-11 15:38:21.714550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.246 [2024-07-11 15:38:21.714584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.246 [2024-07-11 15:38:21.714617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.246 [2024-07-11 15:38:21.714638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.246 [2024-07-11 15:38:21.714648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.246 [2024-07-11 15:38:21.714658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.246 [2024-07-11 15:38:21.714669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.246 [2024-07-11 15:38:21.714681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.246 [2024-07-11 15:38:21.714691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.246 [2024-07-11 15:38:21.714712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714722] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.246 [2024-07-11 15:38:21.714743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.246 [2024-07-11 15:38:21.714789] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.246 [2024-07-11 15:38:21.714821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.246 [2024-07-11 15:38:21.714831] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.246 [2024-07-11 15:38:21.714841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.246 [2024-07-11 15:38:21.714852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.247 [2024-07-11 15:38:21.714862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.247 [2024-07-11 15:38:21.714872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.247 [2024-07-11 15:38:21.714882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.247 [2024-07-11 15:38:21.714892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.247 [2024-07-11 15:38:21.714902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.247 [2024-07-11 15:38:21.714912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.247 [2024-07-11 15:38:21.714922] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.247 [2024-07-11 15:38:21.714933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:08.247 [2024-07-11 15:38:21.714943] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.247 [2024-07-11 15:38:21.714953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.247 [2024-07-11 15:38:21.714963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.247 [2024-07-11 15:38:21.714974] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.247 [2024-07-11 15:38:21.714985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.247 [2024-07-11 15:38:21.714994] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.247 [2024-07-11 15:38:21.715006] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.247 [2024-07-11 15:38:21.715032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.247 [2024-07-11 15:38:21.715048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.247 [2024-07-11 15:38:21.715060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.247 [2024-07-11 15:38:21.715071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.247 [2024-07-11 15:38:21.715081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.247 
[2024-07-11 15:38:21.715092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.247 [2024-07-11 15:38:21.715102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.247 [2024-07-11 15:38:21.715113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.247 [2024-07-11 15:38:21.715124] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.247 [2024-07-11 15:38:21.715138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.247 [2024-07-11 15:38:21.715162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.247 [2024-07-11 15:38:21.715173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.247 [2024-07-11 15:38:21.715185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.247 [2024-07-11 15:38:21.715196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.247 [2024-07-11 15:38:21.715208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.247 [2024-07-11 15:38:21.715219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.247 [2024-07-11 15:38:21.715230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.247 [2024-07-11 15:38:21.715242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.247 [2024-07-11 15:38:21.715253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.247 [2024-07-11 15:38:21.715310] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:08.247 [2024-07-11 15:38:21.715328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.247 [2024-07-11 15:38:21.715353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.247 [2024-07-11 15:38:21.715365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.247 [2024-07-11 15:38:21.715377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.247 [2024-07-11 15:38:21.715389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.715401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.247 [2024-07-11 15:38:21.715413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:30:08.247 [2024-07-11 15:38:21.715425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.754765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.754885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.247 [2024-07-11 15:38:21.754907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.281 ms 00:30:08.247 [2024-07-11 15:38:21.754919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.755063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.755082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.247 [2024-07-11 15:38:21.755095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:30:08.247 [2024-07-11 15:38:21.755106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.797459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.797550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.247 [2024-07-11 15:38:21.797583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.238 ms 00:30:08.247 [2024-07-11 15:38:21.797595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.797658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.797674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.247 [2024-07-11 15:38:21.797694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.247 [2024-07-11 15:38:21.797705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.797871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.797891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.247 [2024-07-11 15:38:21.797904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:08.247 [2024-07-11 15:38:21.797915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.798096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.798127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.247 [2024-07-11 15:38:21.798149] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:30:08.247 [2024-07-11 15:38:21.798166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.815931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.815973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.247 [2024-07-11 15:38:21.816011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.736 ms 00:30:08.247 [2024-07-11 15:38:21.816023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.816202] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:08.247 [2024-07-11 15:38:21.816227] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.247 [2024-07-11 15:38:21.816243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.816255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.247 [2024-07-11 15:38:21.816268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:08.247 [2024-07-11 15:38:21.816279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.828788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.828855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:08.247 [2024-07-11 15:38:21.828885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.481 ms 00:30:08.247 [2024-07-11 15:38:21.828896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.829013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.829028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:08.247 [2024-07-11 15:38:21.829053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:30:08.247 [2024-07-11 15:38:21.829064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.829158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.829175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:08.247 [2024-07-11 15:38:21.829194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:08.247 [2024-07-11 15:38:21.829205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.829888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.829931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.247 [2024-07-11 15:38:21.829946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:30:08.247 [2024-07-11 15:38:21.829956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.829978] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:08.247 [2024-07-11 15:38:21.830028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.830083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:08.247 [2024-07-11 15:38:21.830102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:08.247 [2024-07-11 15:38:21.830113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.841583] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.247 [2024-07-11 15:38:21.841816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.247 [2024-07-11 15:38:21.841835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.247 [2024-07-11 15:38:21.841848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.673 ms 00:30:08.247 [2024-07-11 15:38:21.841890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.247 [2024-07-11 15:38:21.843961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.248 [2024-07-11 15:38:21.844005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:08.248 [2024-07-11 15:38:21.844044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:30:08.248 [2024-07-11 15:38:21.844061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.248 [2024-07-11 15:38:21.844158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.248 [2024-07-11 15:38:21.844176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.248 [2024-07-11 15:38:21.844187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:08.248 [2024-07-11 15:38:21.844197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.248 [2024-07-11 15:38:21.844241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.248 [2024-07-11 15:38:21.844287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.248 [2024-07-11 15:38:21.844299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.248 [2024-07-11 15:38:21.844310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.248 [2024-07-11 15:38:21.844349] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:08.248 [2024-07-11 15:38:21.844365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.248 [2024-07-11 15:38:21.844376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:08.248 [2024-07-11 15:38:21.844388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:08.248 [2024-07-11 15:38:21.844398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.507 [2024-07-11 15:38:21.871660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.507 [2024-07-11 15:38:21.871721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.507 [2024-07-11 15:38:21.871754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.235 ms 00:30:08.507 [2024-07-11 15:38:21.871772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.507 [2024-07-11 15:38:21.871847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.507 [2024-07-11 15:38:21.871863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.507 [2024-07-11 15:38:21.871876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.033 ms 00:30:08.507 [2024-07-11 15:38:21.871885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.507 [2024-07-11 15:38:21.873280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 166.639 ms, result 0 00:30:53.451  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-11 15:39:06.816463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.451 [2024-07-11 15:39:06.816560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:53.451 [2024-07-11 15:39:06.816579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:53.451 [2024-07-11 15:39:06.816590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.451 [2024-07-11 15:39:06.816873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:53.451 [2024-07-11 15:39:06.819726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.451 [2024-07-11 15:39:06.819753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:53.451 [2024-07-11 15:39:06.819772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:30:53.451 [2024-07-11 15:39:06.819781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.451 [2024-07-11 15:39:06.819992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.451 [2024-07-11 15:39:06.820022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:53.451 [2024-07-11 15:39:06.820033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:30:53.451 [2024-07-11 15:39:06.820042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.451 [2024-07-11 15:39:06.820084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.451 [2024-07-11 15:39:06.820097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 
00:30:53.451 [2024-07-11 15:39:06.820108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:53.451 [2024-07-11 15:39:06.820123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.451 [2024-07-11 15:39:06.820173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.451 [2024-07-11 15:39:06.820186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:53.451 [2024-07-11 15:39:06.820196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:53.451 [2024-07-11 15:39:06.820206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.451 [2024-07-11 15:39:06.820221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:53.451 [2024-07-11 15:39:06.820237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:53.451 [2024-07-11 15:39:06.820336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820423] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 
[2024-07-11 15:39:06.820662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:30:53.452 [2024-07-11 15:39:06.820934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.820993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:53.452 [2024-07-11 15:39:06.821276] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:53.452 [2024-07-11 15:39:06.821286] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1 00:30:53.452 [2024-07-11 15:39:06.821296] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:53.452 [2024-07-11 15:39:06.821304] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:53.452 [2024-07-11 15:39:06.821317] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:53.452 [2024-07-11 15:39:06.821327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:53.452 [2024-07-11 15:39:06.821336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:53.452 [2024-07-11 15:39:06.821346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:53.453 [2024-07-11 15:39:06.821355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:53.453 [2024-07-11 15:39:06.821364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:53.453 [2024-07-11 15:39:06.821373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:53.453 [2024-07-11 15:39:06.821381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.453 [2024-07-11 15:39:06.821393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:53.453 [2024-07-11 15:39:06.821404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:30:53.453 [2024-07-11 15:39:06.821413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.835397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.453 [2024-07-11 15:39:06.835442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:53.453 [2024-07-11 15:39:06.835455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.965 ms 00:30:53.453 [2024-07-11 15:39:06.835465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.835804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.453 [2024-07-11 15:39:06.835837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:53.453 [2024-07-11 15:39:06.835849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:30:53.453 [2024-07-11 15:39:06.835858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.865134] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:06.865187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:53.453 [2024-07-11 15:39:06.865201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:06.865211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.865266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:06.865279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:53.453 [2024-07-11 15:39:06.865290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:06.865299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.865359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:06.865397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:53.453 [2024-07-11 15:39:06.865408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:06.865417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.865435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:06.865447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:53.453 [2024-07-11 15:39:06.865457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:06.865465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:06.947068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:06.947139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:53.453 [2024-07-11 15:39:06.947155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:06.947165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:53.453 [2024-07-11 15:39:07.013178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:53.453 [2024-07-11 15:39:07.013285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:53.453 [2024-07-11 15:39:07.013356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013365] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:53.453 [2024-07-11 15:39:07.013478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:53.453 [2024-07-11 15:39:07.013558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:53.453 [2024-07-11 15:39:07.013632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:53.453 [2024-07-11 15:39:07.013703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:53.453 [2024-07-11 15:39:07.013713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:53.453 [2024-07-11 15:39:07.013722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.453 [2024-07-11 15:39:07.013839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 197.346 ms, result 0 00:30:54.391 00:30:54.391 00:30:54.391 15:39:07 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:56.300 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:56.300 15:39:09 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:56.300 [2024-07-11 15:39:09.618554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:30:56.300 [2024-07-11 15:39:09.618693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87643 ] 00:30:56.300 [2024-07-11 15:39:09.764101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.300 [2024-07-11 15:39:09.913177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.560 [2024-07-11 15:39:10.162998] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:56.560 [2024-07-11 15:39:10.163109] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:56.821 [2024-07-11 15:39:10.317941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.318006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:56.821 [2024-07-11 15:39:10.318051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:56.821 [2024-07-11 15:39:10.318063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.318126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.318155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:56.821 [2024-07-11 15:39:10.318167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:56.821 [2024-07-11 15:39:10.318181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.318208] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:56.821 [2024-07-11 15:39:10.319112] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:56.821 [2024-07-11 15:39:10.319179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.319195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:56.821 [2024-07-11 15:39:10.319205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:30:56.821 [2024-07-11 15:39:10.319215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.319611] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:56.821 [2024-07-11 15:39:10.319649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.319662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:56.821 [2024-07-11 15:39:10.319673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:56.821 [2024-07-11 15:39:10.319688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.319754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.319768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:56.821 [2024-07-11 15:39:10.319778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:56.821 [2024-07-11 15:39:10.319787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.320190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:56.821 [2024-07-11 15:39:10.320217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:56.821 [2024-07-11 15:39:10.320230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:30:56.821 [2024-07-11 15:39:10.320244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.320328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.320344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:56.821 [2024-07-11 15:39:10.320370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:56.821 [2024-07-11 15:39:10.320380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.320412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.320427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:56.821 [2024-07-11 15:39:10.320438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:56.821 [2024-07-11 15:39:10.320447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.320476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:56.821 [2024-07-11 15:39:10.324328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.324361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:56.821 [2024-07-11 15:39:10.324395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.857 ms 00:30:56.821 [2024-07-11 15:39:10.324404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.324441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.821 [2024-07-11 15:39:10.324455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:56.821 [2024-07-11 15:39:10.324465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:56.821 [2024-07-11 15:39:10.324475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.821 [2024-07-11 15:39:10.324523] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:56.821 [2024-07-11 15:39:10.324551] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:56.821 [2024-07-11 15:39:10.324603] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:56.821 [2024-07-11 15:39:10.324624] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:56.821 [2024-07-11 15:39:10.324714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:56.821 [2024-07-11 15:39:10.324727] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:56.822 [2024-07-11 15:39:10.324740] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:56.822 [2024-07-11 15:39:10.324752] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:56.822 [2024-07-11 15:39:10.324765] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:56.822 [2024-07-11 15:39:10.324775] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:56.822 [2024-07-11 15:39:10.324785] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:56.822 [2024-07-11 15:39:10.324794] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:56.822 [2024-07-11 15:39:10.324807] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:56.822 [2024-07-11 15:39:10.324818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.324827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:56.822 [2024-07-11 15:39:10.324838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:30:56.822 [2024-07-11 15:39:10.324847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.822 [2024-07-11 15:39:10.324923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.324936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:56.822 [2024-07-11 15:39:10.324946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:56.822 [2024-07-11 15:39:10.324955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.822 [2024-07-11 15:39:10.325065] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:56.822 [2024-07-11 15:39:10.325083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:56.822 [2024-07-11 15:39:10.325094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:56.822 [2024-07-11 15:39:10.325122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:56.822 [2024-07-11 15:39:10.325150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:56.822 [2024-07-11 15:39:10.325167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:56.822 [2024-07-11 15:39:10.325176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:56.822 [2024-07-11 15:39:10.325185] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:56.822 [2024-07-11 15:39:10.325195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:56.822 [2024-07-11 15:39:10.325207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:56.822 [2024-07-11 15:39:10.325229] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:56.822 [2024-07-11 15:39:10.325265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325281] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:56.822 [2024-07-11 15:39:10.325311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:56.822 [2024-07-11 15:39:10.325372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:56.822 [2024-07-11 15:39:10.325414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:56.822 [2024-07-11 15:39:10.325458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:56.822 [2024-07-11 15:39:10.325500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:56.822 [2024-07-11 15:39:10.325529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:56.822 [2024-07-11 15:39:10.325545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:56.822 [2024-07-11 15:39:10.325558] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:56.822 [2024-07-11 15:39:10.325572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:56.822 [2024-07-11 15:39:10.325586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:56.822 [2024-07-11 15:39:10.325601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:56.822 [2024-07-11 15:39:10.325631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:56.822 [2024-07-11 15:39:10.325645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325659] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:56.822 [2024-07-11 15:39:10.325675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:56.822 [2024-07-11 15:39:10.325694] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:56.822 [2024-07-11 15:39:10.325725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:56.822 [2024-07-11 15:39:10.325741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:56.822 [2024-07-11 15:39:10.325755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:56.822 
[2024-07-11 15:39:10.325770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:56.822 [2024-07-11 15:39:10.325784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:56.822 [2024-07-11 15:39:10.325799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:56.822 [2024-07-11 15:39:10.325814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:56.822 [2024-07-11 15:39:10.325833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.325850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:56.822 [2024-07-11 15:39:10.325866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:56.822 [2024-07-11 15:39:10.325882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:56.822 [2024-07-11 15:39:10.325898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:56.822 [2024-07-11 15:39:10.325914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:56.822 [2024-07-11 15:39:10.325930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:56.822 [2024-07-11 15:39:10.325946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:56.822 [2024-07-11 15:39:10.325962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:56.822 [2024-07-11 15:39:10.325988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:56.822 [2024-07-11 15:39:10.326023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.326059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.326078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.326095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.326112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:56.822 [2024-07-11 15:39:10.326128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:56.822 [2024-07-11 15:39:10.326152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.326169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:56.822 [2024-07-11 15:39:10.326186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:56.822 [2024-07-11 15:39:10.326202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:56.822 [2024-07-11 15:39:10.326219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:56.822 [2024-07-11 15:39:10.326236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.326252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:56.822 [2024-07-11 15:39:10.326270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:30:56.822 [2024-07-11 15:39:10.326285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.822 [2024-07-11 15:39:10.363942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.363987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:56.822 [2024-07-11 15:39:10.364020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.586 ms 00:30:56.822 [2024-07-11 15:39:10.364046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.822 [2024-07-11 15:39:10.364149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.364163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:56.822 [2024-07-11 15:39:10.364175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:56.822 [2024-07-11 15:39:10.364184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.822 [2024-07-11 15:39:10.393805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.393847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:56.822 [2024-07-11 15:39:10.393878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.550 ms 00:30:56.822 [2024-07-11 15:39:10.393887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.822 [2024-07-11 15:39:10.393930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.822 [2024-07-11 15:39:10.393944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:56.823 [2024-07-11 15:39:10.393960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:56.823 [2024-07-11 15:39:10.393969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.394155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.394187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:56.823 [2024-07-11 15:39:10.394214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:56.823 [2024-07-11 15:39:10.394224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.394360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.394379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:56.823 [2024-07-11 15:39:10.394391] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:30:56.823 [2024-07-11 15:39:10.394405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.407211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.407260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:56.823 [2024-07-11 15:39:10.407294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.781 ms 00:30:56.823 [2024-07-11 15:39:10.407303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.407459] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:56.823 [2024-07-11 15:39:10.407480] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:56.823 [2024-07-11 15:39:10.407507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.407533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:56.823 [2024-07-11 15:39:10.407559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:56.823 [2024-07-11 15:39:10.407569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.418109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.418136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:56.823 [2024-07-11 15:39:10.418164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.517 ms 00:30:56.823 [2024-07-11 15:39:10.418173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.418271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.418284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:56.823 [2024-07-11 15:39:10.418294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:30:56.823 [2024-07-11 15:39:10.418303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.418350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.418379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:56.823 [2024-07-11 15:39:10.418427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:56.823 [2024-07-11 15:39:10.418436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.419077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.419161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:56.823 [2024-07-11 15:39:10.419176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:30:56.823 [2024-07-11 15:39:10.419185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.419206] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:56.823 [2024-07-11 15:39:10.419219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.419241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:56.823 [2024-07-11 15:39:10.419255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:56.823 [2024-07-11 15:39:10.419265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.429757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:56.823 [2024-07-11 15:39:10.430051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.430086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:56.823 [2024-07-11 15:39:10.430099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.747 ms 00:30:56.823 [2024-07-11 15:39:10.430124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.432139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.432180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:56.823 [2024-07-11 15:39:10.432208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.974 ms 00:30:56.823 [2024-07-11 15:39:10.432222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.432321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.432337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:56.823 [2024-07-11 15:39:10.432347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:56.823 [2024-07-11 15:39:10.432356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.432382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.432394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:56.823 [2024-07-11 15:39:10.432418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:56.823 [2024-07-11 15:39:10.432443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.823 [2024-07-11 15:39:10.432493] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:56.823 [2024-07-11 15:39:10.432507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.823 [2024-07-11 15:39:10.432516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:56.823 [2024-07-11 15:39:10.432526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:56.823 [2024-07-11 15:39:10.432536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.082 [2024-07-11 15:39:10.458233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.082 [2024-07-11 15:39:10.458286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:57.082 [2024-07-11 15:39:10.458331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.673 ms 00:30:57.082 [2024-07-11 15:39:10.458364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.082 [2024-07-11 15:39:10.458429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.082 [2024-07-11 15:39:10.458445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:57.082 [2024-07-11 15:39:10.458456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:30:57.082 [2024-07-11 15:39:10.458465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.082 [2024-07-11 15:39:10.459910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.458 ms, result 0 00:31:40.640  Copying: 23/1024 [MB] (23 MBps) ... Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 15:39:54.203156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.640 [2024-07-11 15:39:54.203217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:40.640 [2024-07-11 15:39:54.203289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:40.640 [2024-07-11 15:39:54.203317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.640 [2024-07-11 15:39:54.206694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:40.640 [2024-07-11 15:39:54.211055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.640 [2024-07-11 15:39:54.211116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:40.640 [2024-07-11 15:39:54.211131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:31:40.640 [2024-07-11 15:39:54.211147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.640 [2024-07-11 15:39:54.221292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.640 [2024-07-11 15:39:54.221329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:40.640 [2024-07-11 15:39:54.221358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.005 ms 00:31:40.640 [2024-07-11 15:39:54.221375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.640 [2024-07-11 15:39:54.221406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.640 [2024-07-11 15:39:54.221419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:40.640 [2024-07-11
15:39:54.221430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:40.640 [2024-07-11 15:39:54.221439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.640 [2024-07-11 15:39:54.221485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.640 [2024-07-11 15:39:54.221498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:40.640 [2024-07-11 15:39:54.221508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:40.640 [2024-07-11 15:39:54.221516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.640 [2024-07-11 15:39:54.221550] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:40.640 [2024-07-11 15:39:54.221581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130560 / 261120 wr_cnt: 1 state: open 00:31:40.640 [2024-07-11 15:39:54.221593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:40.640 [2024-07-11 15:39:54.221604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221779] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.221982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 
15:39:54.222078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:31:40.641 [2024-07-11 15:39:54.222355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:40.641 [2024-07-11 15:39:54.222642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:40.642 [2024-07-11 15:39:54.222652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:40.642 [2024-07-11 15:39:54.222662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:40.642 [2024-07-11 15:39:54.222672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:40.642 [2024-07-11 15:39:54.222682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:40.642 [2024-07-11 15:39:54.222706] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:40.642 [2024-07-11 15:39:54.222718] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1 00:31:40.642 [2024-07-11 15:39:54.222728] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130560 00:31:40.642 [2024-07-11 15:39:54.222738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130592 00:31:40.642 [2024-07-11 15:39:54.222747] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130560 00:31:40.642 [2024-07-11 15:39:54.222757] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:40.642 [2024-07-11 15:39:54.222766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:40.642 [2024-07-11 15:39:54.222777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:40.642 [2024-07-11 15:39:54.222786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:40.642 [2024-07-11 15:39:54.222795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:40.642 [2024-07-11 15:39:54.222803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:40.642 [2024-07-11 15:39:54.222813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.642 [2024-07-11 15:39:54.222823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:40.642 [2024-07-11 15:39:54.222837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:31:40.642 [2024-07-11 15:39:54.222847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.642 [2024-07-11 15:39:54.236246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.642 [2024-07-11 15:39:54.236277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:40.642 [2024-07-11 15:39:54.236307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.380 ms 00:31:40.642 [2024-07-11 15:39:54.236317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.642 [2024-07-11 15:39:54.236722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.642 [2024-07-11 15:39:54.236749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:40.642 [2024-07-11 15:39:54.236762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:31:40.642 [2024-07-11 15:39:54.236771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 
15:39:54.266668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.266703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:40.902 [2024-07-11 15:39:54.266733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.266742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.266797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.266810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:40.902 [2024-07-11 15:39:54.266819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.266828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.266881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.266898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:40.902 [2024-07-11 15:39:54.266939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.266964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.266987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.267000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:40.902 [2024-07-11 15:39:54.267010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.267020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.341500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.341552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:40.902 [2024-07-11 15:39:54.341582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.341591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.405521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.405566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:40.902 [2024-07-11 15:39:54.405597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.405606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.405688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.405703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:40.902 [2024-07-11 15:39:54.405713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.405722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.405759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.405772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:40.902 [2024-07-11 15:39:54.405788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.405797] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.405928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.405950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:40.902 [2024-07-11 15:39:54.405962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.405996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.406065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.406083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:40.902 [2024-07-11 15:39:54.406100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.406126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.406195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.406219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:40.902 [2024-07-11 15:39:54.406231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.406252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.406301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.902 [2024-07-11 15:39:54.406317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:40.902 [2024-07-11 15:39:54.406334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.902 [2024-07-11 15:39:54.406344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.902 [2024-07-11 15:39:54.406511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 205.567 ms, result 0 00:31:42.280 00:31:42.280 00:31:42.280 15:39:55 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:42.540 [2024-07-11 15:39:55.919842] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:31:42.540 [2024-07-11 15:39:55.920013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88096 ] 00:31:42.540 [2024-07-11 15:39:56.089282] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.799 [2024-07-11 15:39:56.234776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.059 [2024-07-11 15:39:56.489771] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.059 [2024-07-11 15:39:56.489851] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.059 [2024-07-11 15:39:56.645200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.059 [2024-07-11 15:39:56.645260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:43.059 [2024-07-11 15:39:56.645292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:43.059 [2024-07-11 15:39:56.645302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.059 [2024-07-11 15:39:56.645363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.059 [2024-07-11 15:39:56.645382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:43.059 [2024-07-11 15:39:56.645392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:43.059 [2024-07-11 15:39:56.645405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.059 [2024-07-11 15:39:56.645431] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:43.059 [2024-07-11 15:39:56.646429] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:43.059 [2024-07-11 15:39:56.646497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.059 [2024-07-11 15:39:56.646513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:43.059 [2024-07-11 15:39:56.646524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:31:43.059 [2024-07-11 15:39:56.646534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.059 [2024-07-11 15:39:56.646976] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:43.059 [2024-07-11 15:39:56.647016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.059 [2024-07-11 15:39:56.647066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:43.059 [2024-07-11 15:39:56.647079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:43.059 [2024-07-11 15:39:56.647111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.059 [2024-07-11 15:39:56.647166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.059 [2024-07-11 15:39:56.647182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:43.059 [2024-07-11 15:39:56.647209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:43.059 [2024-07-11 15:39:56.647219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.059 [2024-07-11 15:39:56.647632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:43.059 [2024-07-11 15:39:56.647658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:43.059 [2024-07-11 15:39:56.647671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:31:43.059 [2024-07-11 15:39:56.647684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.059 [2024-07-11 15:39:56.647758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.059 [2024-07-11 15:39:56.647776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:43.059 [2024-07-11 15:39:56.647787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:43.059 [2024-07-11 15:39:56.647796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.060 [2024-07-11 15:39:56.647829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.060 [2024-07-11 15:39:56.647843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:43.060 [2024-07-11 15:39:56.647854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:43.060 [2024-07-11 15:39:56.647864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.060 [2024-07-11 15:39:56.647894] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:43.060 [2024-07-11 15:39:56.651734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.060 [2024-07-11 15:39:56.651780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:43.060 [2024-07-11 15:39:56.651813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:31:43.060 [2024-07-11 15:39:56.651822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.060 [2024-07-11 15:39:56.651858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.060 [2024-07-11 15:39:56.651871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:43.060 [2024-07-11 15:39:56.651881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:43.060 [2024-07-11 15:39:56.651889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.060 [2024-07-11 15:39:56.651939] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:43.060 [2024-07-11 15:39:56.651965] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:43.060 [2024-07-11 15:39:56.652001] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:43.060 [2024-07-11 15:39:56.652053] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:43.060 [2024-07-11 15:39:56.652180] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:43.060 [2024-07-11 15:39:56.652197] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:43.060 [2024-07-11 15:39:56.652210] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:43.060 [2024-07-11 15:39:56.652224] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652236] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652247] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:43.060 [2024-07-11 15:39:56.652257] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:43.060 [2024-07-11 15:39:56.652267] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:43.060 [2024-07-11 15:39:56.652282] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:43.060 [2024-07-11 15:39:56.652293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.060 [2024-07-11 15:39:56.652303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:43.060 [2024-07-11 15:39:56.652314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:31:43.060 [2024-07-11 15:39:56.652324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.060 [2024-07-11 15:39:56.652406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.060 [2024-07-11 15:39:56.652419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:43.060 [2024-07-11 15:39:56.652430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:43.060 [2024-07-11 15:39:56.652439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.060 [2024-07-11 15:39:56.652556] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:43.060 [2024-07-11 15:39:56.652581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:43.060 [2024-07-11 15:39:56.652594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:43.060 [2024-07-11 15:39:56.652624] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:43.060 [2024-07-11 15:39:56.652652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.060 [2024-07-11 15:39:56.652670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:43.060 [2024-07-11 15:39:56.652679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:43.060 [2024-07-11 15:39:56.652687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.060 [2024-07-11 15:39:56.652697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:43.060 [2024-07-11 15:39:56.652706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:43.060 [2024-07-11 15:39:56.652715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:43.060 [2024-07-11 15:39:56.652733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652742] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:43.060 [2024-07-11 15:39:56.652761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:43.060 [2024-07-11 15:39:56.652801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:43.060 [2024-07-11 15:39:56.652828] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:43.060 [2024-07-11 15:39:56.652854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.060 [2024-07-11 15:39:56.652872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:43.060 [2024-07-11 15:39:56.652881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.060 [2024-07-11 15:39:56.652899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:43.060 [2024-07-11 15:39:56.652908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:43.060 [2024-07-11 15:39:56.652917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.060 [2024-07-11 15:39:56.652926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:43.060 [2024-07-11 15:39:56.652950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:43.060 [2024-07-11 15:39:56.652959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:43.060 [2024-07-11 15:39:56.652979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:43.060 [2024-07-11 15:39:56.652989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.652997] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:43.060 [2024-07-11 15:39:56.653007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:43.060 [2024-07-11 15:39:56.653018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.060 [2024-07-11 15:39:56.653028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.060 [2024-07-11 15:39:56.653073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:43.060 [2024-07-11 15:39:56.653085] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:43.060 [2024-07-11 15:39:56.653094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:43.060 
[2024-07-11 15:39:56.653104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:43.060 [2024-07-11 15:39:56.653114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:43.060 [2024-07-11 15:39:56.653124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:43.060 [2024-07-11 15:39:56.653134] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:43.060 [2024-07-11 15:39:56.653147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.060 [2024-07-11 15:39:56.653159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:43.060 [2024-07-11 15:39:56.653170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:43.060 [2024-07-11 15:39:56.653180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:43.060 [2024-07-11 15:39:56.653191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:43.060 [2024-07-11 15:39:56.653201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:43.060 [2024-07-11 15:39:56.653212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:43.060 [2024-07-11 15:39:56.653222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:43.060 [2024-07-11 15:39:56.653233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:43.060 [2024-07-11 15:39:56.653243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:43.060 [2024-07-11 15:39:56.653254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:43.060 [2024-07-11 15:39:56.653264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:43.060 [2024-07-11 15:39:56.653274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:43.060 [2024-07-11 15:39:56.653285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:43.060 [2024-07-11 15:39:56.653295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:43.060 [2024-07-11 15:39:56.653306] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:43.060 [2024-07-11 15:39:56.653323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.061 [2024-07-11 15:39:56.653335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:43.061 [2024-07-11 15:39:56.653345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:43.061 [2024-07-11 15:39:56.653371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:43.061 [2024-07-11 15:39:56.653381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:43.061 [2024-07-11 15:39:56.653407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.061 [2024-07-11 15:39:56.653418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:43.061 [2024-07-11 15:39:56.653429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:31:43.061 [2024-07-11 15:39:56.653439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.685671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.685733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:43.321 [2024-07-11 15:39:56.685765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.178 ms 00:31:43.321 [2024-07-11 15:39:56.685775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.685870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.685885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:43.321 [2024-07-11 15:39:56.685896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:43.321 [2024-07-11 15:39:56.685904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.717404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.717447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:43.321 [2024-07-11 15:39:56.717477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.326 ms 00:31:43.321 [2024-07-11 15:39:56.717486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.717532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.717547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:43.321 [2024-07-11 15:39:56.717563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:43.321 [2024-07-11 15:39:56.717572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.717705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.717753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:43.321 [2024-07-11 15:39:56.717764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:43.321 [2024-07-11 15:39:56.717774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.717909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.717928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:43.321 [2024-07-11 15:39:56.717939] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:31:43.321 [2024-07-11 15:39:56.717953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.731415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.731480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:43.321 [2024-07-11 15:39:56.731514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.409 ms 00:31:43.321 [2024-07-11 15:39:56.731524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.731652] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:43.321 [2024-07-11 15:39:56.731672] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:43.321 [2024-07-11 15:39:56.731684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.321 [2024-07-11 15:39:56.731710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:43.321 [2024-07-11 15:39:56.731736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:31:43.321 [2024-07-11 15:39:56.731745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.321 [2024-07-11 15:39:56.742554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.742586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:43.322 [2024-07-11 15:39:56.742615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.784 ms 00:31:43.322 [2024-07-11 15:39:56.742624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.742725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.742739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:43.322 [2024-07-11 15:39:56.742749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:43.322 [2024-07-11 15:39:56.742757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.742807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.742837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:43.322 [2024-07-11 15:39:56.742886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:43.322 [2024-07-11 15:39:56.742896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.743617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.743658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:43.322 [2024-07-11 15:39:56.743671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:31:43.322 [2024-07-11 15:39:56.743680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.743701] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:43.322 [2024-07-11 15:39:56.743715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.743725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:43.322 [2024-07-11 15:39:56.743750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:43.322 [2024-07-11 15:39:56.743775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.754161] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:43.322 [2024-07-11 15:39:56.754358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.754376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:43.322 [2024-07-11 15:39:56.754387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.560 ms 00:31:43.322 [2024-07-11 15:39:56.754395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.756336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.756378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:43.322 [2024-07-11 15:39:56.756405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:31:43.322 [2024-07-11 15:39:56.756419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.756489] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:43.322 [2024-07-11 15:39:56.756961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.756996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:43.322 [2024-07-11 15:39:56.757009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:31:43.322 [2024-07-11 15:39:56.757030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.757064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.757078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:43.322 [2024-07-11 15:39:56.757089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:43.322 [2024-07-11 15:39:56.757105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.757139] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:43.322 [2024-07-11 15:39:56.757153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.757163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:43.322 [2024-07-11 15:39:56.757173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:43.322 [2024-07-11 15:39:56.757182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.782248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.782315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:43.322 [2024-07-11 15:39:56.782352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.045 ms 00:31:43.322 [2024-07-11 15:39:56.782362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.782427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.322 [2024-07-11 15:39:56.782444] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:43.322 [2024-07-11 15:39:56.782455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:43.322 [2024-07-11 15:39:56.782463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.322 [2024-07-11 15:39:56.791560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.141 ms, result 0 00:32:28.232  Copying: 24/1024 [MB] (24 MBps) ... Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-11 15:40:41.710220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.232 [2024-07-11 15:40:41.710311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:28.232 [2024-07-11 15:40:41.710339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:28.232 [2024-07-11 15:40:41.710350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.232 [2024-07-11 15:40:41.710416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:28.232 [2024-07-11 15:40:41.713558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.232 [2024-07-11 15:40:41.713602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:28.232 [2024-07-11 15:40:41.713631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:32:28.232 [2024-07-11 15:40:41.713640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.232 [2024-07-11 15:40:41.713860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.232 [2024-07-11 15:40:41.713891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:28.232 [2024-07-11 15:40:41.713908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:32:28.232 [2024-07-11 15:40:41.713918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
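
The Copying trace above is condensed: the original console output repeated one overwritten progress line per step, all hovering around 22-24 MBps, ending at the average shown. The bracketing timestamps corroborate that average: 'FTL startup' finishes at 15:39:56.79 and teardown begins at 15:40:41.71, so 1024 MB moved in roughly 44.9 s, about 22.8 MBps, in line with '(average 22 MBps)'. The same arithmetic with bc:

  # 56.79 and 101.71 are the two timestamps above, in seconds past 15:39:00
  echo "scale=2; 1024 / (101.71 - 56.79)" | bc   # prints 22.79

00:32:28.232 [2024-07-11 15:40:41.713950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: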
[FTL][ftl0] Action 00:32:28.232 [2024-07-11 15:40:41.713963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:28.232 [2024-07-11 15:40:41.714002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:28.232 [2024-07-11 15:40:41.714011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.232 [2024-07-11 15:40:41.714081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.232 [2024-07-11 15:40:41.714098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:28.232 [2024-07-11 15:40:41.714109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:28.232 [2024-07-11 15:40:41.714119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.232 [2024-07-11 15:40:41.714142] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:28.232 [2024-07-11 15:40:41.714158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:32:28.232 [2024-07-11 15:40:41.714171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 
15:40:41.714345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:32:28.232 [2024-07-11 15:40:41.714618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:28.232 [2024-07-11 15:40:41.714750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.714992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:28.233 [2024-07-11 15:40:41.715225] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:28.233 [2024-07-11 15:40:41.715235] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1 00:32:28.233 [2024-07-11 15:40:41.715246] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:32:28.233 [2024-07-11 15:40:41.715255] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:32:28.233 [2024-07-11 15:40:41.715264] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:32:28.233 [2024-07-11 15:40:41.715275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:32:28.233 [2024-07-11 15:40:41.715285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:28.233 [2024-07-11 15:40:41.715294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:28.233 [2024-07-11 15:40:41.715304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:28.233 [2024-07-11 15:40:41.715312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:28.233 [2024-07-11 15:40:41.715321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
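
The WAF figure in the stats dump above is simply total writes over user writes: 3360 / 3328 is about 1.0096, i.e. roughly 1% of extra block writes from FTL metadata and relocation on top of what the user submitted. Checked with bc:

  echo "scale=4; 3360 / 3328" | bc   # prints 1.0096, matching the dumped WAF

00:32:28.233 [2024-07-11 15:40:41.715330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.233 [2024-07-11 15:40:41.715345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:28.233 [2024-07-11 15:40:41.715355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:32:28.233 [2024-07-11 15:40:41.715365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.728983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.233 [2024-07-11 15:40:41.729016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:28.233 [2024-07-11 15:40:41.729091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.598 ms 00:32:28.233 [2024-07-11 15:40:41.729102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.729509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:28.233 [2024-07-11 15:40:41.729539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:28.233 [2024-07-11 15:40:41.729552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]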
duration: 0.380 ms 00:32:28.233 [2024-07-11 15:40:41.729562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.758426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.233 [2024-07-11 15:40:41.758462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:28.233 [2024-07-11 15:40:41.758491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.233 [2024-07-11 15:40:41.758506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.758556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.233 [2024-07-11 15:40:41.758569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:28.233 [2024-07-11 15:40:41.758578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.233 [2024-07-11 15:40:41.758587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.758687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.233 [2024-07-11 15:40:41.758705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:28.233 [2024-07-11 15:40:41.758716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.233 [2024-07-11 15:40:41.758725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.758752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.233 [2024-07-11 15:40:41.758764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:28.233 [2024-07-11 15:40:41.758775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.233 [2024-07-11 15:40:41.758784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.233 [2024-07-11 15:40:41.833525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.233 [2024-07-11 15:40:41.833578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:28.233 [2024-07-11 15:40:41.833608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.233 [2024-07-11 15:40:41.833624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:28.491 [2024-07-11 15:40:41.900273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:28.491 [2024-07-11 15:40:41.900380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:28.491 [2024-07-11 
15:40:41.900472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:28.491 [2024-07-11 15:40:41.900631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:28.491 [2024-07-11 15:40:41.900732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:28.491 [2024-07-11 15:40:41.900810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.900864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:28.491 [2024-07-11 15:40:41.900885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:28.491 [2024-07-11 15:40:41.900895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:28.491 [2024-07-11 15:40:41.900904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:28.491 [2024-07-11 15:40:41.901084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 190.798 ms, result 0 00:32:29.427 00:32:29.427 00:32:29.427 15:40:42 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:31.330 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 86521 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 86521 ']' 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 86521 00:32:31.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (86521) - No such process 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 86521 is not found' 00:32:31.330 Process with pid 86521 is not found 00:32:31.330 15:40:44 
ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:31.330 Remove shared memory files 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_band_md /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_l2p_l1 /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_l2p_l2 /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_l2p_l2_ctx /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_nvc_md /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_p2l_pool /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_sb /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_sb_shm /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_trim_bitmap /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_trim_log /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_trim_md /dev/hugepages/ftl_c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1_vmap 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:31.330 00:32:31.330 real 3m28.738s 00:32:31.330 user 3m16.466s 00:32:31.330 sys 0m13.924s 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:31.330 ************************************ 00:32:31.330 END TEST ftl_restore_fast 00:32:31.330 15:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:31.330 ************************************ 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@1142 -- # return 0 00:32:31.330 15:40:44 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:31.330 15:40:44 ftl -- ftl/ftl.sh@14 -- # killprocess 78628 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@948 -- # '[' -z 78628 ']' 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@952 -- # kill -0 78628 00:32:31.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (78628) - No such process 00:32:31.330 Process with pid 78628 is not found 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 78628 is not found' 00:32:31.330 15:40:44 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:31.330 15:40:44 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88582 00:32:31.330 15:40:44 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:31.330 15:40:44 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88582 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@829 -- # '[' -z 88582 ']' 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.330 15:40:44 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:31.331 15:40:44 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.331 15:40:44 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:31.331 15:40:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:31.331 [2024-07-11 15:40:44.825336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
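
Two details worth noting in the teardown above: the 'testfile: OK' from md5sum -c is the pass signal of this restore test (data written before the FTL fast shutdown must checksum identically after the restore), and remove_shm deletes the per-region hugetlbfs files holding the device's shared-memory state, the same state that let the earlier startup skip the P2L checkpoint restore ('SHM: skipping p2l ckpt restore'). The files are keyed by device UUID, so the long explicit list reduces to a glob (a sketch only; the script enumerates the paths literally):

  uuid=c8a1d95c-c33b-4de9-9bd6-bd82ad45b6a1   # device UUID, as in the rm -f list above
  rm -f /dev/hugepages/ftl_${uuid}_*          # band_md, l2p_*, nvc_md, p2l_pool, sb*, trim_*, vmap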
00:32:31.331 [2024-07-11 15:40:44.825494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88582 ] 00:32:31.589 [2024-07-11 15:40:44.985225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.849 [2024-07-11 15:40:45.213293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.418 15:40:45 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:32.418 15:40:45 ftl -- common/autotest_common.sh@862 -- # return 0 00:32:32.418 15:40:45 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:32.677 nvme0n1 00:32:32.677 15:40:46 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:32.677 15:40:46 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:32.677 15:40:46 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:32.936 15:40:46 ftl -- ftl/common.sh@28 -- # stores=a3864cd5-bbff-48c3-bbbf-f483bfe95047 00:32:32.936 15:40:46 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:32.936 15:40:46 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3864cd5-bbff-48c3-bbbf-f483bfe95047 00:32:32.936 15:40:46 ftl -- ftl/ftl.sh@23 -- # killprocess 88582 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@948 -- # '[' -z 88582 ']' 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@952 -- # kill -0 88582 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@953 -- # uname 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88582 00:32:32.936 killing process with pid 88582 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88582' 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@967 -- # kill 88582 00:32:32.936 15:40:46 ftl -- common/autotest_common.sh@972 -- # wait 88582 00:32:34.837 15:40:48 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:34.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:34.837 Waiting for block devices as requested 00:32:35.096 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.096 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.096 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:35.355 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:40.624 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:40.624 15:40:53 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:40.624 15:40:53 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:40.624 Remove shared memory files 00:32:40.624 15:40:53 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:40.624 15:40:53 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:40.624 15:40:53 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:40.624 15:40:53 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:40.624 15:40:53 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:40.624 00:32:40.624 real 
15m20.422s 00:32:40.624 user 18m3.515s 00:32:40.624 sys 1m40.751s 00:32:40.624 15:40:53 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.624 15:40:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:40.624 ************************************ 00:32:40.624 END TEST ftl 00:32:40.624 ************************************ 00:32:40.624 15:40:53 -- common/autotest_common.sh@1142 -- # return 0 00:32:40.624 15:40:53 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:40.624 15:40:53 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:40.624 15:40:53 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:40.624 15:40:53 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:40.624 15:40:53 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:40.624 15:40:53 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:40.624 15:40:53 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:40.624 15:40:53 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:40.624 15:40:53 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:40.624 15:40:53 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:40.624 15:40:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:40.624 15:40:53 -- common/autotest_common.sh@10 -- # set +x 00:32:40.624 15:40:53 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:40.624 15:40:53 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:40.624 15:40:53 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:40.624 15:40:53 -- common/autotest_common.sh@10 -- # set +x 00:32:41.999 INFO: APP EXITING 00:32:41.999 INFO: killing all VMs 00:32:41.999 INFO: killing vhost app 00:32:41.999 INFO: EXIT DONE 00:32:42.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:42.516 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:42.775 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:42.775 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:42.775 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:43.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:43.291 Cleaning 00:32:43.291 Removing: /var/run/dpdk/spdk0/config 00:32:43.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:43.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:43.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:43.291 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:43.291 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:43.291 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:43.291 Removing: /var/run/dpdk/spdk0 00:32:43.291 Removing: /var/run/dpdk/spdk_pid61798 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62003 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62213 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62312 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62357 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62485 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62503 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62678 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62775 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62863 00:32:43.555 Removing: /var/run/dpdk/spdk_pid62966 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63055 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63100 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63137 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63199 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63305 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63752 
00:32:43.555 Removing: /var/run/dpdk/spdk_pid63824 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63892 00:32:43.555 Removing: /var/run/dpdk/spdk_pid63908 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64032 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64048 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64168 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64184 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64248 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64266 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64330 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64348 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64511 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64553 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64629 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64699 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64735 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64808 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64854 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64895 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64944 00:32:43.555 Removing: /var/run/dpdk/spdk_pid64985 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65026 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65077 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65119 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65160 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65201 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65248 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65289 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65335 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65376 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65423 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65464 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65505 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65560 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65604 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65645 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65687 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65769 00:32:43.556 Removing: /var/run/dpdk/spdk_pid65880 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66047 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66131 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66173 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66628 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66733 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66837 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66890 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66921 00:32:43.556 Removing: /var/run/dpdk/spdk_pid66997 00:32:43.556 Removing: /var/run/dpdk/spdk_pid67630 00:32:43.556 Removing: /var/run/dpdk/spdk_pid67672 00:32:43.556 Removing: /var/run/dpdk/spdk_pid68170 00:32:43.556 Removing: /var/run/dpdk/spdk_pid68263 00:32:43.556 Removing: /var/run/dpdk/spdk_pid68378 00:32:43.556 Removing: /var/run/dpdk/spdk_pid68431 00:32:43.556 Removing: /var/run/dpdk/spdk_pid68462 00:32:43.556 Removing: /var/run/dpdk/spdk_pid68493 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70344 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70481 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70489 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70508 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70543 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70551 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70563 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70602 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70612 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70624 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70665 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70669 00:32:43.556 Removing: /var/run/dpdk/spdk_pid70681 00:32:43.556 Removing: 
/var/run/dpdk/spdk_pid72030 00:32:43.556 Removing: /var/run/dpdk/spdk_pid72126 00:32:43.556 Removing: /var/run/dpdk/spdk_pid73516 00:32:43.556 Removing: /var/run/dpdk/spdk_pid74866 00:32:43.829 Removing: /var/run/dpdk/spdk_pid74981 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75091 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75200 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75333 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75407 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75547 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75912 00:32:43.829 Removing: /var/run/dpdk/spdk_pid75953 00:32:43.829 Removing: /var/run/dpdk/spdk_pid76424 00:32:43.829 Removing: /var/run/dpdk/spdk_pid76610 00:32:43.829 Removing: /var/run/dpdk/spdk_pid76714 00:32:43.829 Removing: /var/run/dpdk/spdk_pid76825 00:32:43.829 Removing: /var/run/dpdk/spdk_pid76879 00:32:43.829 Removing: /var/run/dpdk/spdk_pid76907 00:32:43.829 Removing: /var/run/dpdk/spdk_pid77188 00:32:43.829 Removing: /var/run/dpdk/spdk_pid77243 00:32:43.829 Removing: /var/run/dpdk/spdk_pid77321 00:32:43.829 Removing: /var/run/dpdk/spdk_pid77704 00:32:43.829 Removing: /var/run/dpdk/spdk_pid77853 00:32:43.829 Removing: /var/run/dpdk/spdk_pid78628 00:32:43.829 Removing: /var/run/dpdk/spdk_pid78758 00:32:43.829 Removing: /var/run/dpdk/spdk_pid78941 00:32:43.829 Removing: /var/run/dpdk/spdk_pid79044 00:32:43.829 Removing: /var/run/dpdk/spdk_pid79393 00:32:43.829 Removing: /var/run/dpdk/spdk_pid79662 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80001 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80195 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80342 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80400 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80545 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80576 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80634 00:32:43.829 Removing: /var/run/dpdk/spdk_pid80835 00:32:43.829 Removing: /var/run/dpdk/spdk_pid81067 00:32:43.829 Removing: /var/run/dpdk/spdk_pid81509 00:32:43.829 Removing: /var/run/dpdk/spdk_pid81967 00:32:43.829 Removing: /var/run/dpdk/spdk_pid82439 00:32:43.829 Removing: /var/run/dpdk/spdk_pid82970 00:32:43.829 Removing: /var/run/dpdk/spdk_pid83119 00:32:43.829 Removing: /var/run/dpdk/spdk_pid83219 00:32:43.829 Removing: /var/run/dpdk/spdk_pid83929 00:32:43.829 Removing: /var/run/dpdk/spdk_pid83994 00:32:43.829 Removing: /var/run/dpdk/spdk_pid84475 00:32:43.829 Removing: /var/run/dpdk/spdk_pid84900 00:32:43.829 Removing: /var/run/dpdk/spdk_pid85426 00:32:43.829 Removing: /var/run/dpdk/spdk_pid85536 00:32:43.829 Removing: /var/run/dpdk/spdk_pid85585 00:32:43.829 Removing: /var/run/dpdk/spdk_pid85657 00:32:43.829 Removing: /var/run/dpdk/spdk_pid85718 00:32:43.829 Removing: /var/run/dpdk/spdk_pid85788 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86005 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86074 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86147 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86235 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86270 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86344 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86521 00:32:43.829 Removing: /var/run/dpdk/spdk_pid86745 00:32:43.829 Removing: /var/run/dpdk/spdk_pid87183 00:32:43.829 Removing: /var/run/dpdk/spdk_pid87643 00:32:43.829 Removing: /var/run/dpdk/spdk_pid88096 00:32:43.829 Removing: /var/run/dpdk/spdk_pid88582 00:32:43.829 Clean 00:32:43.829 15:40:57 -- common/autotest_common.sh@1451 -- # return 0 00:32:43.829 15:40:57 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:43.829 15:40:57 -- 
common/autotest_common.sh@728 -- # xtrace_disable
00:32:43.829 15:40:57 -- common/autotest_common.sh@10 -- # set +x
00:32:44.112 15:40:57 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:32:44.112 15:40:57 -- common/autotest_common.sh@728 -- # xtrace_disable
00:32:44.112 15:40:57 -- common/autotest_common.sh@10 -- # set +x
00:32:44.112 15:40:57 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:44.112 15:40:57 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:44.112 15:40:57 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:44.112 15:40:57 -- spdk/autotest.sh@391 -- # hash lcov
00:32:44.112 15:40:57 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:44.112 15:40:57 -- spdk/autotest.sh@393 -- # hostname
00:32:44.112 15:40:57 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:44.112 geninfo: WARNING: invalid characters removed from testname!
00:33:06.049 15:41:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:08.580 15:41:21 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:11.122 15:41:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:13.026 15:41:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:15.561 15:41:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:18.097 15:41:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:20.640 15:41:33 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:20.640 15:41:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:20.640 15:41:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:20.640 15:41:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:20.640 15:41:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:20.640 15:41:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:20.640 15:41:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:20.640 15:41:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:20.640 15:41:33 -- paths/export.sh@5 -- $ export PATH
00:33:20.640 15:41:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:20.640 15:41:33 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:20.640 15:41:33 -- common/autobuild_common.sh@444 -- $ date +%s
00:33:20.640 15:41:33 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720712493.XXXXXX
00:33:20.640 15:41:33 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720712493.HrGhlS
00:33:20.640 15:41:33 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:33:20.640 15:41:33 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:33:20.640 15:41:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:33:20.640 15:41:33 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:20.640 15:41:33 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:20.640 15:41:33 -- common/autobuild_common.sh@460 -- $ get_config_params
00:33:20.640 15:41:33 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:33:20.640 15:41:33 -- common/autotest_common.sh@10 -- $ set +x
00:33:20.640 15:41:33 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:33:20.640 15:41:33 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:33:20.640 15:41:33 -- pm/common@17 -- $ local monitor
00:33:20.640 15:41:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:20.640 15:41:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:20.640 15:41:33 -- pm/common@25 -- $ sleep 1
00:33:20.640 15:41:33 -- pm/common@21 -- $ date +%s
00:33:20.640 15:41:33 -- pm/common@21 -- $ date +%s
00:33:20.640 15:41:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720712493
00:33:20.640 15:41:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720712493
00:33:20.640 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720712493_collect-vmstat.pm.log
00:33:20.640 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720712493_collect-cpu-load.pm.log
00:33:21.209 15:41:34 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:33:21.209 15:41:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:33:21.209 15:41:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:33:21.209 15:41:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:21.209 15:41:34 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:21.209 15:41:34 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:21.209 15:41:34 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:21.209 15:41:34 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:21.209 15:41:34 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:21.209 15:41:34 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:21.209 15:41:34 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:21.209 15:41:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:21.209 15:41:34 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:21.209 15:41:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:21.209 15:41:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:21.209 15:41:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:33:21.469 15:41:34 -- pm/common@44 -- $ pid=90290
00:33:21.469 15:41:34 -- pm/common@50 -- $ kill -TERM 90290
00:33:21.469 15:41:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:21.469 15:41:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:33:21.469 15:41:34 -- pm/common@44 -- $ pid=90291
00:33:21.469 15:41:34 -- pm/common@50 -- $ kill -TERM 90291
00:33:21.469 + [[ -n 5188 ]]
00:33:21.469 + sudo kill 5188
00:33:21.482 [Pipeline] }
00:33:21.504 [Pipeline] // timeout
00:33:21.512 [Pipeline] }
00:33:21.530 [Pipeline] // stage
00:33:21.535 [Pipeline] }
00:33:21.553 [Pipeline] // catchError
00:33:21.563 [Pipeline] stage
00:33:21.565 [Pipeline] { (Stop VM)
00:33:21.580 [Pipeline] sh
00:33:21.860 + vagrant halt
00:33:24.391 ==> default: Halting domain...
00:33:30.965 [Pipeline] sh
00:33:31.246 + vagrant destroy -f
00:33:33.780 ==> default: Removing domain...
00:33:34.731 [Pipeline] sh
00:33:35.018 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:35.027 [Pipeline] }
00:33:35.045 [Pipeline] // stage
00:33:35.050 [Pipeline] }
00:33:35.067 [Pipeline] // dir
00:33:35.073 [Pipeline] }
00:33:35.090 [Pipeline] // wrap
00:33:35.097 [Pipeline] }
00:33:35.112 [Pipeline] // catchError
00:33:35.122 [Pipeline] stage
00:33:35.125 [Pipeline] { (Epilogue)
00:33:35.140 [Pipeline] sh
00:33:35.421 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:40.819 [Pipeline] catchError
00:33:40.821 [Pipeline] {
00:33:40.830 [Pipeline] sh
00:33:41.104 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:41.362 Artifacts sizes are good
00:33:41.370 [Pipeline] }
00:33:41.380 [Pipeline] // catchError
00:33:41.389 [Pipeline] archiveArtifacts
00:33:41.394 Archiving artifacts
00:33:41.538 [Pipeline] cleanWs
00:33:41.549 [WS-CLEANUP] Deleting project workspace...
00:33:41.549 [WS-CLEANUP] Deferred wipeout is used...
00:33:41.555 [WS-CLEANUP] done
00:33:41.557 [Pipeline] }
00:33:41.574 [Pipeline] // stage
00:33:41.579 [Pipeline] }
00:33:41.594 [Pipeline] // node
00:33:41.599 [Pipeline] End of Pipeline
00:33:41.647 Finished: SUCCESS